SCOTT D Miller - For the latest and greatest information on Feedback Informed Treatment


Reducing Dropout and Unplanned Terminations in Mental Health Services

May 12, 2021 By scottdm Leave a Comment

Being a mental health professional is a lot like being a parent.

Please read that last statement carefully before drawing any conclusions!

I did not say mental health services are similar to parenting.  Rather, despite their best efforts, therapists, like parents, routinely feel they fall short of their hopes and objectives.  To be sure, research shows both enjoy their respective roles (1, 2).  That said, they frequently are left with the sense that no matter how much they do, it’s never good enough.  A recent poll found, for example, that 60% of parents feel they fail their children in the first years of life.  And given the relatively high level of turnover on a typical clinician’s caseload — with a worldwide average of 5 to 6 sessions — what is therapy if not a kind of Groundhog Day repetition of being a new parent?

For therapists, such feelings are compounded by the number of clients who, without notice or warning, stop coming to treatment.  Besides the obvious impact on productivity and income, the evidence shows such unplanned endings negatively impact clinicians’ self-worth, ranking third among the top 25 most stressful client behaviors (3, p. 15).

Recent, large-scale meta-analytic studies indicate one in five clients, or 20% (4), drop out of care — a figure that is slightly higher for adolescents and children (5).  However, when dropout is defined as “clients who discontinue unilaterally without experiencing a reliable or clinically significant improvement in the problem that originally led them to seek treatment,” the rate is much higher (6)!
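For readers unfamiliar with the “reliable improvement” criterion mentioned in that definition, the standard yardstick is the Jacobson-Truax reliable change index.  Here is a minimal sketch; the scale, standard deviation, reliability, and scores below are illustrative assumptions, not figures from the studies cited:

```python
import math

def reliable_change_index(pre: float, post: float, sd_pre: float, reliability: float) -> float:
    """Jacobson-Truax RCI: raw change divided by the standard error of the difference."""
    se_measurement = sd_pre * math.sqrt(1 - reliability)  # standard error of measurement
    se_difference = math.sqrt(2) * se_measurement         # standard error of a difference score
    return (post - pre) / se_difference

# Illustrative numbers only: a drop from 40 to 28 on a hypothetical distress
# scale with SD = 7.5 and test-retest reliability = 0.80.
rci = reliable_change_index(pre=40, post=28, sd_pre=7.5, reliability=0.80)
print(round(rci, 2))  # -> -2.53; |RCI| > 1.96 counts as reliable change
```

Because lower scores mean less distress on this hypothetical scale, an RCI beyond -1.96 indicates reliable improvement; a client who leaves care before crossing that threshold would count as a dropout under the stricter definition.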

Feeling “not good enough” yet?

By the way, if you are thinking, “that’s not true of my caseload, as hardly any of the people I see drop out” or “my success rate is much higher than the figure just cited,” recall the parent who always acts as though their child is the cutest, smartest, or most talented in class.  Besides being unbecoming, such behavior often displays a lack of awareness of the facts.

So, turning to the evidence, data indicate therapists routinely overestimate their effectiveness, with a staggering 96% ranking their outcomes “above average” (7)!  And while the same “rose-colored glasses” may cause us to underestimate the number of clients who terminate without notice, a more troubling reality may be the relatively large number who don’t drop out despite experiencing no measurable benefit from our work with them — up to 25%, research suggests.

What to do?

As the author Alexandre Dumas once famously observed, “Nothing succeeds like success.”  And when it comes to addressing dropout, a recent, independent meta-analysis of 58 studies involving nearly 22,000 clients found Feedback-Informed Treatment (FIT) resulted in a 15% reduction in the number of people who end psychotherapy without benefit (8).  The same study — and another recent one (9) — documented that FIT helps therapists respond more effectively to clients most at risk of staying in treatment for extended periods of time without benefit.

Will FIT prevent you from ever feeling “not good enough” again?  Probably not.   But as most parents with grown children say, “looking back, it was worth it.”

OK, that’s it for now,

Scott

Scott D. Miller Ph.D.
Director, International Center for Clinical Excellence

P.S.: If you are looking for support with your implementation of Feedback-Informed Treatment in your practice or agency, join colleagues from around the world in our upcoming online trainings.  
FIT Implementation Intensive 2021 | FIT Summer CAFÉ


Filed Under: Behavioral Health, Feedback Informed Treatment - FIT, Practice Based Evidence, Therapeutic Relationship

The Baader-Meinhof Effect in Trauma and Psychotherapy

August 28, 2019 By scottdm 35 Comments

Have you heard of the “Baader-Meinhof” effect?  If not, I’m positive you’ll soon be seeing evidence of it everywhere.

That’s what “it” is, by the way — that curious experience of seeing something you’ve just noticed, been told of, or thought about, cropping up all around you.  So …

You buy a car and suddenly it’s everywhere.  That outfit you thought was so unique?  Boom!  Everyone is sporting it.  How about the conversation you just had with your friend?  You know, the one that was so stimulating and interesting?  Now the subject is on everyone’s lips.

Depending on your level of self-esteem or degree of narcissism, Baader-Meinhof either leaves you feeling on the “cutting edge” of cultural trends or constantly lagging behind others.  For me, it’s generally the latter.  And recently, it’s a feeling that has been dogging me a fair bit.

The subject?  Trauma.

Whether simple or complex, ongoing or one-off, experienced as a child or adult, trauma is the topic du jour — a cause célèbre linked to an ever-growing list of problems, including depression, anxiety, dissociation, insomnia, headaches, stomachaches, asthma, stroke, diabetes, and most recently, ADHD.

Then, of course, there are the offers for training.  Is it just me or is trauma the subject of every other email solicitation, podcast announcement, and printed flyer?

The truth is our field has been here many times before.  Over the last 25 years, depression, multiple personality disorder, rapid cycling bipolar disorder II, attention deficit disorder, and borderline personality disorder have all burst on the scene, enjoyed a period of intense professional interest, and then receded into the background.

Available evidence makes clear this pattern — aha, whoa, and hmm, what’s next? — is far from benign.  While identifying who is suffering and why is an important and noble endeavor, outcomes of mental healthcare have not improved over the last 40 years.  What’s more, no evidence exists that training in treatment modalities specific to any particular diagnosis — the popularly termed “evidence-based” practices — improves effectiveness.  Problematically, studies do show undergoing such training increases practitioners’ perception of their own competence (Neimeyer, Taylor, & Cox, 2012).

On more than one occasion, I’ve witnessed advocates of particular treatment methods claim it’s unethical for a therapist to work with people who’ve experienced trauma if they haven’t been trained in a specific “trauma-focused” approach.  It’s a curious statement — one which, given the evidence, can only be meant to bully and shame practitioners into going along with the crowd.  Data on the subject are clear and date back over a decade (1, 2, 3).  In case of any doubt, a brand new review of the research, published in the journal Psychotherapy, concludes, “There are no clinically meaningful differences between … treatment methods for trauma … [including those] designed intentionally to omit components [believed essential to] effective treatments (viz., exposure, cognitive restructuring, and focus on trauma)” (p. 393).

If you find the results reported in the preceding paragraph confusing or unbelievable, recalling the “Baader-Meinhof” effect can help.  It reminds us that despite its current popularity in professional discourse, trauma and its treatment are nothing new.  Truth is, therapists have always been helping those who’ve suffered its effects.  More, while the field’s outcomes have not improved over time, studies of real-world practitioners show they generally achieve results on par with those obtained in studies of so-called evidence-based treatments (1, 2, 3).

Of course, none of the foregoing means nothing can be done to improve our effectiveness.  As my Swedish grandmother Stena used to say, “The room for improvement is the biggest one in our house!”

To get started, or to fine-tune your professional development efforts, listen in to an interview I did recently with Elizabeth Irias from Clearly Clinical (an approved provider of CEUs for APA, NBCC, NAADAC, CCAPP, and CAMFT).  Available here: What Every Therapist Needs To Know: Lessons From The Research, Ep. 61.

In it, I lay out several concrete, evidence-based steps practitioners can take to improve their therapeutic effectiveness.  It’s FREE, plus you can earn a FREE hour of CE credit.  Additionally, if you follow them on Instagram and leave a comment on this post, you’ll be automatically entered into a contest for one year of free, unlimited continuing education — the winner to be announced on October 31st, 2019.

Until next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence
ICCE FIT Supervision Intensive 2020 | ICCE Advanced FIT Intensive 2020


Filed Under: evidence-based practice, Feedback Informed Treatment - FIT, Practice Based Evidence

Responsiveness is “Job One” in Becoming a More Effective Therapist

June 28, 2019 By scottdm 4 Comments

Look at the picture to the left.  What do you see?

In no time at all, most report a large face with deep-set eyes and a slight frown.

Actually, once seen, it’s difficult, if not impossible, to unsee.  Try it.  Look away momentarily, then back again.

Once set in motion, the process tends to take on a life of its own, with many other items coming into focus. 

Do you see the ghostly hand?  Skeletonized spine and rib cage?  Other eyes and faces?  A clown hat?

From an evolutionary perspective, the tendency to find patterns — be it in clouds, polished marble surfaces, burn marks on toast, or tea leaves in a cup — is easy to understand.  For our earliest ancestors, seeing eyes in the underbrush, whether real or illusory, had obvious survival value.   Whether or not the perceptions or predictions were accurate mattered less than the consequences of being wrong.   

In short, we are hardwired to look for and find patterns.  And, as researchers Foster and Kokko (2008) point out, “natural selection … favour[s] strategies that make many incorrect causal associations in order to establish those that are essential for survival …” (p. 36).   

As proof of the tendency to draw incorrect causal associations, one need only look at the field’s most popular beliefs and practices, many of which, the evidence shows, have little or no relationship to outcome.  These include:

  • Training in or use of evidence-based treatment approaches;
  • Participation in clinical supervision;
  • Attending continuing education workshops;
  • Professional degree, licensure, or amount of clinical experience.

Alas, all of the above, and more, are mere “faces in the clouds” — compelling to be sure, but more accurately seen as indicators of our desire to improve than reliable pathways to better results.  They are not.

So, what, if anything, can we do to improve our effectiveness?

According to researchers Stiles and Horvath (2017), “Certain therapists are more effective than others … because [they are] appropriately responsive … providing each client with a different, individually tailored treatment” (p. 71).   

Sounds good, right?  The recommendation that one should “fit the therapy to the person” is as old as the profession.   The challenge, of course, is knowing when to respond as well as whether any of the myriad “in-the-moment” adjustments we make in a given therapy hour actually help. 

That is, until now.

Consider a new study involving hundreds of real-world therapists and more than 10,000 of their clients (Brown and Cazauvielh, 2019).  Intriguingly, the researchers found that therapists who were more “engaged” in formally seeking and using feedback from their clients regarding progress and quality of care — as measured by how frequently they logged in to a computerized outcome-management system to check their results — were significantly more effective.

How much, you ask? 

Look at the graph above.  With an effect-size difference of .4 σ, the feedback-informed practitioners (the green curve) were on average more effective than 70% of their less engaged, less responsive peers (the red).
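For those curious how an effect-size gap translates into a percentile comparison, a common back-of-the-envelope conversion is Cohen’s U3: assuming normally distributed outcomes, the average member of the higher-scoring group sits at the Φ(d) percentile of the comparison group.  This is a rough sketch of that conversion, not the study’s own computation, and the exact percentage depends on which overlap statistic a study reports:

```python
from math import erf, sqrt

def cohens_u3(d: float) -> float:
    """Proportion of the comparison group falling below the mean of the
    higher-scoring group, assuming normal distributions: Phi(d)."""
    return 0.5 * (1 + erf(d / sqrt(2)))

# A d = 0.4 difference puts the average "engaged" therapist above roughly
# two-thirds of the comparison group.
print(round(cohens_u3(0.4), 2))  # -> 0.66
```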

Such findings confirm and extend results from another study I blogged about back in May documenting that feedback-informed treatment, or FIT, led to significant improvements in the quality and strength of the therapeutic alliance.

Why some choose to actively use feedback to inform and improve the quality and outcome of care, while others dutifully administer measurement scales but ignore the results, is currently unknown — that is, scientifically.  Could it really be that mysterious, however?  Many of us have exercise equipment, bought in the moment but never used, stuffed into closets.  In time, I suspect research will point to the same factors responsible for implementation failures in other areas of life, both personal and professional (e.g., habit, lack of support, contextual barriers).

Until then, one thing we know helps is community.  Having like-minded colleagues to interact with and share experiences makes a difference when it comes to staying on track.  The International Center for Clinical Excellence is a free social network with thousands of members around the world.  Every day, practitioners, managers, and supervisors meet to address questions and provide support to one another in their efforts to implement feedback-informed treatment.  Click on the link to connect today.

Still wanting more?  Listen to my interview with Gay Barfield, Ph.D., a colleague of Carl Rogers, with whom she co-directed the Carl Rogers Institute for Peace — an organization that applied person-centered principles to real and potential international crisis situations, and for which Dr. Rogers was nominated for the Nobel Peace Prize in 1987.  I know her words and being will inspire you to seek and use client feedback on a more regular basis…

OK, done for now,

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

P.S.: Registration for the Spring 2020 Advanced and Supervision Intensives is open!  Both events sold out months in advance this year.  Click on the icons below for more information or to register.
ICCE Advanced FIT Intensive 2020 | ICCE FIT Supervision Intensive 2020


Filed Under: evidence-based practice, Feedback Informed Treatment - FIT, Practice Based Evidence

Clinical Practice Guidelines: Beneficial Development or Bad Therapy?

December 4, 2017 By scottdm 13 Comments

A couple of weeks ago, the American Psychological Association (APA) released clinical practice guidelines for the treatment of people diagnosed with post-traumatic stress disorder (PTSD).  “Developed over four years using a rigorous process,” according to an article in the APA Monitor, these are the first of many such recommendations of specific treatment methods for particular psychiatric diagnoses to be published by the organization.

Almost immediately, controversy broke out.  On the Psychology Today blog, Clinical Associate Professor Jonathan Shedler advised practitioners and patients to ignore the new guidelines, labeling them “bad therapy.”  Within a week, Professors Dean McKay and Scott Lilienfeld responded, lauding the guidelines as a “significant advance for psychotherapy practice,” while repeatedly accusing Shedler of committing logical fallacies and misrepresenting the evidence.

One thing I know for sure: coming in at just over 700 pages, few if any practitioners will ever read the complete guideline and its supporting appendices.  Beyond length, the way the information is presented — especially the lack of hypertext for cross-referencing the studies cited — seriously compromises any straightforward effort to review and verify evidentiary claims.

If, as the old saying goes, “the devil is in the details,” the level of mind-numbing minutiae contained in the official documents ensures he’ll remain well-hidden, tempting all but the most compulsive to accept the headlines on faith.

Consider the question of whether certain treatment approaches are more effective than others.  Page 1 of the Executive Summary identifies differential efficacy as a “key question” to be addressed by the Guideline.  Ultimately, four specific approaches are strongly recommended, being deemed more effective than… wait for it… “relaxation.”

My first thought is, “OK, curious comparison.”   Nevertheless, I read on.

Only by digging deep into the report, tracing the claim to the specific citations, and then using PsycNET and another subscription service to access the actual studies, is it possible to discover that in the vast majority of published trials reviewed, the four “strongly recommended” approaches were actually compared to nothing.  That’s right, nothing.

In the few studies that did include relaxation, the structure of that particular “treatment” precluded sufferers from talking directly about their traumatic experiences.  At this point, my curiosity gave way to chagrin.  Is it any wonder the four other approaches proved more helpful?  What real-world practitioner would limit their work with someone suffering from PTSD to recording “a relaxation script” and telling their client to “listen to it for an hour each day”?

(By the way, it took me several hours to distill the information noted above from the official documentation — and I’m someone with a background in research, access to several online databases, a certain facility with search engines, and connections with a community of fellow researchers with whom I can consult.)

On the subject of what research shows works best in the treatment of PTSD, meta-analyses of studies in which two or more approaches intended to be therapeutic are directly compared consistently find no difference in outcome between methods — importantly, whether the treatments are designated “trauma-focused” or not.  Meanwhile, another highly specialized type of research — known as dismantling studies — fails to provide any evidence for the belief that specialized treatments contain ingredients specifically remedial to the diagnosis!  And yes, that includes the ingredient most believe essential to therapeutic success in the treatment of PTSD: exposure (1, 2).

So, if the data I cite above are accurate — and freely available — how could the committee that created the Guideline come to such dramatically different conclusions?  In particular, why go to great lengths to recommend particular approaches to the exclusion of others?

Be forewarned, you may find my next statement confusing.  The summary of studies contained in the Guideline and supportive appendices is absolutely accurate.  It is the interpretation of that body of research, however, that is in question.

More than anything else, the difference between the recommendations contained in the Guideline and the evidence I cite above is attributable to a deep and longstanding rift in the body politic of the APA.  How otherwise is one to reconcile advocating the use of particular approaches with APA’s official policy on psychotherapy, which recognizes that “different forms . . . typically produce relatively similar outcomes”?

Seeking to place the profession “on a comparable plane” with medicine, some within the organization — in particular, the leaders and membership of Division 12 (Clinical Psychology) — have long sought to create a psychological formulary.  In part, their argument goes, “Since medicine creates lists of recommended treatments and procedures, why not psychology?”

Here, the answer is simple and straightforward: because psychotherapy does not work like medicine.  As Jerome Frank observed long before the weight of evidence supported his view, effective psychological care is comprised of:

  • An emotionally-charged, confiding relationship with a helping person (e.g., a therapist);
  • A healing context or setting (e.g., clinic);
  • A rational, conceptual scheme, or myth that is congruent with the sufferer’s worldview and provides a plausible explanation for their difficulties (e.g., psychotherapy theories); and
  • Rituals and/or procedures consistent with the explanation (e.g., techniques).

These four attributes not only fit the evidence but explain why virtually all psychological approaches tested over the last 40 years work — even those labelled pseudoscience (e.g., EMDR) by Lilienfeld and other advocates of guidelines comprised of “approved therapies.”

That the profession could benefit from good guidelines goes without saying.  Healing the division within APA would be a good place to start.  Until then, encouraging practitioners to follow the organization’s own definition of evidence-based practice would suffice.  To wit, “Evidence based practice is the integration of the best available research with clinical expertise in the context of patient (sic) characteristics, culture, and preferences.”  Note the absence of any mention of specific treatment approaches.  Instead, consistent with Frank’s observations, and the preponderance of research findings, emphasis is placed on fitting care to the person.

How to do this?  The official statement continues, encouraging the “monitoring of patient (sic) progress . . . that may suggest the need to adjust the treatment.”  Over the last decade, multiple systems have been developed for tracking engagement and progress in real time.  Our own approach, known as Feedback-Informed Treatment (FIT), is being applied by thousands of therapists around the world with literally millions of clients.  It is listed on the National Registry of Evidence-based Programs and Practices.  More, when engagement and progress are tracked together with clients in real time, data to date document improvements in retention and outcome of mental health services regardless of the treatment method being used.

Until  next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

 

Filed Under: evidence-based practice, Practice Based Evidence, PTSD

More Deliberate Practice Resources…

May 30, 2017 By scottdm 1 Comment

Last week, I blogged about a free, online resource aimed at helping therapists improve their outcomes via deliberate practice.  As the web-based system was doubling as a randomized controlled trial (RCT), participants would not only be accessing a cutting-edge, evidence-based protocol but also contributing to the field’s growing knowledge in this area.

To say interest was high doesn’t even come close.  Within 45 minutes of the first social media blast, every available spot was filled — including those on the waiting list!  Lead researchers Daryl Chow and Sharon Lu managed to open a few additional spots, and yet demand still far exceeded supply.

I soon started getting emails.  Their content was strikingly similar — like this one from Kathy Hardie-Williams, an MFT from Forest Grove, Oregon: “I’m interested in deliberate practice!  Are there other materials, measures, tools that I can access and start using in my practice?”

The answer is, “YES!”  Here they are:


Resource #1.  Written for practicing therapists, supervisors, and supervisees, this volume brings together leading researchers and supervisors to teach practical methods for using deliberate practice to improve the effectiveness of psychotherapy.

Twelve chapters split into four sections covering: (1) the science of expertise and professional development; (2) practical, evidence-based methods for tracking individual performance; (3) step-by-step applications for integrating deliberate practice into clinical practice and supervision; and (4) recommendations for making psychotherapist expertise development routine and expected.

“This book offers a challenge and a roadmap for addressing a fundamental issue in mental health: How can therapists improve and become experts?  Our goal,” the editors of this new volume state, “is to bring the science of expertise to the field of mental health.  We do this by proposing a model for using the ‘Cycle of Excellence’ throughout therapists’ careers, from supervised training to independent practice.”

The book is due out June 1st.  Order today by clicking here: The Cycle of Excellence: Using Deliberate Practice to Improve Supervision and Training

Resource #2: The MyOutcomes E-Learning Platform

The folks at MyOutcomes have just added a new module on deliberate practice to their already extensive e-learning platform.  The information is cutting edge, and the production values are simply fantastic.  More, MyOutcomes is offering free access to the system for the first 25 people who email support@myoutcomes.com.  Put the words “Responding to Scott’s Blogpost” in the subject line.  Meanwhile, here’s a taste of the course:

Resource #3:

Last but not least, the FIT Professional Development Intensive.  There simply is no better way to learn about deliberate practice than to attend the upcoming intensive in Chicago.  It’s the only such training available.  Together with my colleague Tony Rousmaniere — author of the new book, Deliberate Practice for Psychotherapists: A Guide to Improving Clinical Effectiveness — we will help you develop an individualized plan for improving your effectiveness based on the latest scientific evidence on expert performance.

We’ve got a few spaces left.  Those already registered are coming from all around the globe, so you’ll be in good company.  Click here to register today!

OK, that’s it for now.  Wishing you all the best for the Summer,

Scott D. Miller, Ph.D.

 

Filed Under: Behavioral Health, deliberate practice, evidence-based practice, excellence, Feedback, Feedback Informed Treatment - FIT, Practice Based Evidence

The Asch Effect: The Impact of Conformity, Rebelliousness, and Ignorance in Research on Psychology and Psychotherapy

December 3, 2016 By scottdm 5 Comments

asch-1
Consider the photo above.  If you ever took Psych 101, it should be familiar.  The year is 1951.  The balding man on the right is psychologist Solomon Asch.  Gathered around the table are a group of undergraduates at Swarthmore College participating in a vision test.

Briefly, the procedure began with a cardboard card displaying three lines of varying length.  A second card containing a single line was then produced, and participants were asked to state out loud which line it best matched.  Try it for yourself:
asch-2
Well, if you guessed “C,” you would have been the only one to do so, as all the other participants taking the test on that day chose “B.”  As you may recall, Asch was not really assessing vision.  He was investigating conformity.  All the participants save one were in on the experiment, instructed to choose an obviously incorrect answer in twelve out of eighteen total trials.

The results?

On average, a third of the people in the experiment went along with the majority, with seventy-five percent conforming in at least one trial.

Today, practitioners face similar pressures—to go along with the assertion that some treatment approaches are more effective than others.

Regulatory bodies, including the Substance Abuse and Mental Health Services Administration in the United States and the National Institute for Health and Care Excellence, are actually restricting services and limiting funding to approaches deemed “evidence based.”  The impact on publicly funded mental health and substance abuse treatment is massive.

So, in the spirit of Solomon Asch, consider the lines below and indicate which treatment is most effective:

asch-3
If your eyes tell you that the outcomes between competing therapeutic approaches appear similar, you are right.  Indeed, one of the most robust findings in the research literature over the last 40 years is the lack of difference in outcome between psychotherapeutic approaches.

The key to changing matters is speaking up!  In the original Asch experiments, for example, the addition of even one dissenting vote reduced conformity by 80%!  And no, you don’t have to be a researcher to have an impact.  On this score, when, in a later study, a single dissenter wearing thick glasses — strongly suggestive of poor visual acuity — was added to the group, the likelihood of going along with the crowd was cut in half.

That said, knowing and understanding science does help.  In the 1980s, two researchers found that engineering, mathematics, and chemistry students conformed with the errant majority in only 1 out of 396 trials!

What does the research actually say about the effectiveness of competing treatment approaches?

You can find the most current review of the research in the latest issue of Psychotherapy Research — the premier outlet for studies about psychotherapy.  It’s just out, and I’m pleased and honored to have been part of a dedicated and esteemed group of scientists who are speaking up.  In it, we review and redo several recent meta-analyses purporting to show that one particular method is more effective than all others.  Can you guess which one?

The stakes are high, the consequences serious.  Existing guidelines and lists of approved therapies do not correctly represent existing research about “what works” in treatment.  More, as I’ve blogged about before, they limit choice and effectiveness without improving outcomes — and in certain cases lead to poorer results.  As official definitions make clear, “evidence-based practice” is NOT about applying particular approaches to specific diagnoses, but rather “integrating the best available research with clinical expertise in the context of patient characteristics, culture, and preferences” (p. 273, APA, 2006).

Read it and speak up!

Until next time,

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

Filed Under: Dodo Verdict, evidence-based practice, Feedback Informed Treatment - FIT, Practice Based Evidence

Making the Impossible, Possible: The Fragile Balance

July 25, 2016 By scottdm 1 Comment

TripAdvisor scores it #11 out of 45 things to do in Sausalito, California.  No, it’s not the iconic Golden Gate Bridge or Point Bonita Lighthouse.  Neither is it one of the fantastic local restaurants or bars.  What’s more, in what can be a fairly pricey area, this attraction won’t cost you a penny.  It’s the gravity-defying rock sculptures of local performance artist Bill Dan.

bill dan

So impossible does his work seem that most initially assume there’s a trick: magnets, hooks, cement, or prefabricated construction materials.

Dan 1

Watch for a while, get up close, and you’ll see there are no tricks or shortcuts.  Rather, Bill Dan has vision, a deep understanding of the materials he works with, and perseverance.  Three qualities that, it turns out, are essential in any implementation.

Over the last decade, I’ve had the pleasure of working with agencies and healthcare systems around the world as they work to implement Feedback-Informed Treatment (FIT).  Not long ago, FIT — that is, formally using measures of progress and the therapeutic alliance to guide care — was deemed an evidence-based practice by SAMHSA and listed on the official NREPP website.  Research to date shows that FIT makes the impossible possible, improving the effectiveness of behavioral health services while simultaneously decreasing costs, deterioration, and dropout rates.


 

Over the last decade, a number of treatment settings and healthcare systems have beaten the odds.  Together with insights gleaned from the field of Implementation Science, they are helping us understand what it takes to be successful.

One such group is Prairie Ridge, an integrated behavioral healthcare agency located in Mason City, Iowa.  Recently, I had the privilege of speaking with the clinical leadership and management team at this cutting-edge agency.

Click on the video below to listen in as they share the steps for successfully implementing FIT that have led to improved outcomes and satisfaction across an array of treatment programs, including residential, outpatient, mental health, and addictions.

Until next time,

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

P.S.: Looking for a way to learn the principles and practice of Feedback Informed Treatment?  No need to leave home.  You can learn and earn CE’s at the ICCE Fall FIT Webinar.  Register today at: https://www.eventbrite.ie/e/fall-2016-feedback-informed-treatment-webinar-series-tickets-26431099129.


 

Filed Under: behavioral health, CDOI, evidence-based practice, Feedback Informed Treatment - FIT, Practice Based Evidence

Becoming a more effective therapist: Three evidence-based clues from research on the field’s most effective practitioners

April 15, 2015 By scottdm 2 Comments


It’s one of those secrets everyone knows but few talk openly about: some therapists are more effective than others. Available evidence indicates that clients seen by these practitioners experience 50% more improvement, drop out 50% less often, have shorter lengths of stay, and are significantly less likely to deteriorate while in care.

So, how do these top performers achieve their superior results? More to the point, is there anything the rest of us can learn from them?  The answer is a resounding, “YES!” Over the last decade, researchers have started to unlock the secret of their success.

If you want to improve your effectiveness:

  • Give yourself, “The Benefit of Doubt”

Turns out, top performing therapists evince more of what researchers term, “professional self-doubt.” They are, said another way, less certain about how they work and the results they achieve than their less effective peers. To be sure, their doubt is not disabling but rather a first step, the harbinger of new learning.  As UCLA basketball coach John Wooden once quipped, “It’s what you learn after you know it all that counts.”

One sure-fire way to give yourself the benefit of doubt is to measure your results. Research shows, for example, that most of us overestimate how effective we really are—on average, by 65%!  Augmenting your clinical judgment with reliable and valid feedback about when you are and are not successful will challenge you to reconsider what you long ago stopped questioning.

Assessing the outcome of your work is no longer difficult or time-consuming. For example, the Outcome Rating Scale (ORS) takes less than a minute to administer and score and can be downloaded and used for free. More, a number of web-based systems exist that not only alert you to clients “at risk” of dropping out or experiencing a negative or null outcome from treatment, but also compute and compare your effectiveness to national norms. I reviewed two such systems in recent blog posts (1, 2).
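To give a concrete sense of how quick the scoring is, here is a minimal sketch. The 0–10 cm scale format and the cutoff of 25 reflect my understanding of the published measure; treat them as assumptions and consult the official scoring instructions before any clinical use:

```python
# Illustrative ORS scoring sketch (assumptions, not the official manual).
# The ORS consists of four visual-analogue scales; each client mark is
# measured in centimeters (0-10), and the four ratings are summed for a
# total score of 0-40. A total below the cutoff (commonly cited as 25
# for adults) is taken to indicate clinically significant distress.
def score_ors(individual, interpersonal, social, overall, cutoff=25):
    ratings = (individual, interpersonal, social, overall)
    assert all(0 <= r <= 10 for r in ratings), "each scale runs 0-10 cm"
    total = sum(ratings)
    return total, total < cutoff  # (total score, in clinical range?)

print(score_ors(5.5, 6.0, 4.5, 5.0))  # (21.0, True)
```

Four measurements, one sum, one comparison: well under a minute, as the text above suggests.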

  • Connect for Success

Research shows that 97% of the difference in outcome between therapists can be accounted for by therapist variability in the therapeutic relationship. Said another way, the single largest difference between the best and the rest is the former’s ability to connect with a broader, more complex, and diverse group of clients.

Can you think of any aspect of clinical practice that has yielded such unequivocal results?  The bottom line for those wishing to become more effective is, work on your relationship skills.

As far as which element of the relationship you might want to focus on, consider the graph below. In it, you will find the effect size associated with each element. To the right of the blue bar are the aspects of psychotherapy that receive the majority of professional attention in graduate school and continuing education events, together with their relative contribution to outcome.

[Graph: effect sizes of common versus specific factors]
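For readers who want to unpack what those bars represent: an "effect size" such as Cohen's d is simply the difference between two group means divided by their pooled standard deviation. Here is a minimal, illustrative sketch; the scores and groups below are invented for demonstration and come from no actual study:

```python
import math

def cohens_d(group1, group2):
    """Standardized mean difference between two groups (Cohen's d)."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Unbiased sample variances
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    # Pooled standard deviation across both groups
    s = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / s

# Hypothetical change scores for two invented groups of clients
treated = [8, 10, 7, 9, 11]
untreated = [4, 5, 6, 3, 5]
print(round(cohens_d(treated, untreated), 2))  # 3.19
```

By convention, a d of around 0.2 is considered small, 0.5 medium, and 0.8 large, which is how bars like those in the graph are usually read.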

By the way, for the first time this summer, the ICCE is offering a single day intensive ethics training.  If you need ethics CE’s, this is the event you want to attend. The focus? The relationship.  Given the findings noted above, isn’t that the right thing to be talking about?!

Mark your calendar: August 12th, 2015. Chicago, Illinois.


Register early as the number of participants has been capped at 35 in order to ensure an intimate, individualized experience.

  • Slow and Steady Wins the Race

In the proverbial race between Tortoises and Hares, the most effective clinicians fall squarely in the camp of the ectotherms. For them, there are no shortcuts. No fast track to success. No models that, when applied with fidelity, will lead them to treatment nirvana.

Top performing clinicians approach the subject of improving their outcomes the same way investors prepare for retirement: a little bit every day over a long period of time. Compared to average therapists, these top performers spend 2.5 to 4.5 more hours per week outside of work in activities specifically designed to improve the effectiveness of their work—an activity known as “deliberate practice.”

You can see how the investment in professional development compounds over time in the graph below, taken from a study soon to appear in the journal Psychotherapy.

[Graph: experience and deliberate practice]
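To put hypothetical numbers on the investment analogy: the weekly figure below is the midpoint of the 2.5 to 4.5 extra hours cited above, while the working weeks and time horizon are my own assumptions, chosen only to illustrate how the total accumulates:

```python
# Hypothetical illustration of how a small weekly investment adds up.
extra_hours_per_week = 3.5   # midpoint of the 2.5-4.5 extra hours cited above
weeks_per_year = 48          # assuming roughly four weeks off per year
years = 10                   # an assumed time horizon

total_extra_hours = extra_hours_per_week * weeks_per_year * years
print(total_extra_hours)  # 1680.0 extra hours of deliberate practice
```

Over a decade, a few extra hours a week amounts to well over a thousand hours of focused work on one's craft: the "slow and steady" advantage in plain arithmetic.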

Of course, the quality of the return depends on the nature of the investment. So, what should you invest in? To get better, you must first identify the edge of your “realm of reliable performance”—that spot where what you normally do well begins to break down. From there, you have to develop a concrete plan, complete with small, measurable process and outcome objectives. This is often best accomplished with the help of a mentor or coach, someone who possesses the skill you need and is capable of teaching it to others. Trial, error, and review follow.

You can learn to apply the latest findings about deliberate practice to your own professional development at a special, two-day intensive this summer. Cutting edge research will be translated into highly individualized, step-by-step instructions for improving your clinical performance and effectiveness. We promise you will leave with an evidence-based plan tailored to your personal, professional development needs.

Mark your calendar: August 10-11th, 2015.


Given the highly individualized nature of this event, registration is limited to 20 participants. You can reserve your spot today by clicking here.

Looking forward to meeting you this Summer in Chicago!

Scott

Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence


Filed Under: Feedback Informed Treatment - FIT, Practice Based Evidence, Top Performance Tagged With: effective therapist, evidence-based research

Is Documentation Helping or Hindering Mental Health Care? Please Let Me Know.

November 23, 2014 By scottdm 44 Comments


So, how much time do you spend doing paperwork?  Assessments, progress notes, treatment plans, billing, updates, etc.–the lot?

When I asked the director of the agency I was working at last week, it took him no time to respond. “Fifty percent,” he said, then added without the slightest bit of irony, “It’s a clinic-wide goal, keeping it to 50% of work time.”

Truth is, it’s not the first time I’ve heard this figure.  Wherever I travel–whether in the U.S. or abroad–practitioners are spending more and more time “feeding the bureaucratic beast.”  Each state or federal agency, regulatory body, and payer wants a form of some kind.  Unchecked, regulation has lost touch with reality.

Just a few short years ago, the figure commonly cited was 30%.  In the last edition of The Heart and Soul of Change, published in 2009, we pointed out that in one state, “The forms needed to obtain a marriage certificate, buy a new home, lease an automobile, apply for a passport, open a bank account, and die of natural causes were assembled … altogether weighed 1.4 ounces.  By contrast, the paperwork required for enrolling a single mother in counseling to talk about difficulties her child was experiencing at school came in at 1.25 pounds” (p. 300).

Research shows that a high documentation-to-clinical-service ratio leads to:

  • Higher rates of burnout and job dissatisfaction among clinical staff;
  • Fewer scheduled treatment appointments;
  • More no-shows, cancellations, and disengagement among consumers.

One potential solution has emerged: “concurrent,” a.k.a. “collaborative,” documentation.  It’s a great idea: completing assessments, treatment plans, and progress notes together with clients during, rather than after, the session.  We started doing this to improve transparency and engagement at the Brief Family Therapy Center in Milwaukee, Wisconsin back in the late 1980’s.  At the same time, its chief benefit to date seems to be that it saves time on documentation–as though filling out paperwork is an end in and of itself!

Ostensibly, the goal of paperwork and oversight procedures is to improve accountability.  In these evidence-based times, that leads me to say, “show me the data.”  Consider the widespread practice–mandate, in most instances–of treatment planning. Simply put, it is less science than science fiction.  Perhaps this practice improves outcomes in a galaxy far, far away, but on planet Earth, supporting evidence is sparse to non-existent.  Where is the evidence that any of the other documentation improves accountability, benefits consumers, or results in better outcomes?

Put bluntly, the field needs an alternative.  What practice not only ensures accountability but simultaneously improves the quality and outcome of behavioral health services?  Routinely and formally seeking feedback from consumers about how they are treated and their progress.

Soliciting feedback need not be time-consuming or difficult.  Last year, two brief, easy-to-use scales were deemed “evidence-based” by the Substance Abuse and Mental Health Services Administration (SAMHSA).  The International Center for Clinical Excellence received perfect scores for the materials, training, and quality assurance procedures it makes available for implementing the measures into routine clinical practice:

[Screenshots: SAMHSA NREPP listings]

Then again, these two forms add to the paperwork already burdening clinicians.  The main difference?  Unlike everything else, numerous RCT’s document that using these forms increases effectiveness and efficiency while decreasing both cost and risk of deterioration.

Learn more at the official website: www.whatispcoms.com.  Better yet, join us in Chicago for our upcoming intensives in Feedback Informed Treatment and Supervision:

Advanced FIT Training (2015) | FIT Supervision Training (2015)

In the meantime, would you please let me know your thoughts?  To paraphrase Goldilocks, is the amount of documentation you are required to complete “Too much,” “Too little,” or “Just about right”?  Type in your reply below!

Filed Under: Behavioral Health, Conferences and Training, evidence-based practice, Feedback, Feedback Informed Treatment - FIT, Practice Based Evidence

The Sounds of Silence: More on Research, Researchers, and the Media

July 21, 2014 By scottdm Leave a Comment


Back in April, I blogged about an article that appeared in The Guardian, one of the U.K.’s largest daily newspapers.  Citing a single study published in Denmark, the authors boldly asserted, “The claim that all forms of psychotherapy are winners has been dealt a blow.”  Sure enough, that one study, comparing CBT to psychoanalysis, found that CBT resulted in superior effects in the treatment of bulimia.

As I pointed out at the time, I was surprised that such an obscure research finding would be covered by a major newspaper.  “What could be the point?” I wondered–that is, until I saw who had written the piece.  The authors were none other than psychologist Daniel Freeman, a strong proponent of the idea that certain treatments are better than others, and his journalist brother, Jason.


I have to admit, I suspected an agenda was at work.  After all, scientists have learned not to depend on extraordinary findings from single studies.  Replication is key to separating fact from hopeful fiction.  In the service of this objective, I cited a truly massive study published in Clinical Psychology Review.  Using the statistically rigorous method of meta-analysis, researchers reviewed results from 53 studies of psychological treatments for eating disorders.  The result?  No difference in effect between competing therapeutic approaches–a result confirming 50 years of robust research.  Why hadn’t this particular study been cited?  After all,  it was available at the time the two brothers wrote their piece.

Fast forward six months.  Another study from Denmark has been published, this one comparing two treatments for sexual abuse.  The results?  Both treatments worked, and gains were maintained at one-year follow-up.  What’s more, consistent with the much-maligned “Dodo verdict,” no differences in outcome were found between analytic and systemic treatment approaches.

So far, no article from the Freemans.

Filed Under: Practice Based Evidence Tagged With: evidence based practice

Dumb and Dumber: Research and the Media

April 2, 2014 By scottdm 1 Comment


“Just when I thought you couldn’t get any dumber, you go and do something like this… and totally redeem yourself!”
– Harry in Dumb & Dumber

On January 25th, my inbox began filling with emails from friends and fellow researchers around the globe.  “Have you seen the article in the Guardian?” they asked.  “What do you make of it?” others inquired, “Have you read the study the authors are talking about?  Is it true?!”  A few of the messages were snarkier, even gloating,  “Scott, research has finally proven the Dodo verdict is wrong!”

The article the emails referred to was titled, Are all psychological therapies equally effective?  Don’t ask the dodo.  The subtitle boldly announced, “The claim that all forms of psychotherapy are winners has been dealt a blow.”

Honestly, my first thought on reading the headline was, “Why is an obscure topic like the ‘Dodo verdict’ the subject of an article in a major newspaper?”  Who in their right mind–outside of researchers and a small cadre of psychotherapists–would care?  What possible interest would a lengthy dissertation on the subject–including references to psychologist Saul Rosenzweig (who first coined the expression in the 1930’s) and researcher allegiance effects–hold for the average Joe or Jane reader of The Guardian?  At a minimum, it struck me as odd.

And odd it stayed, until I glanced down to see who had written the piece.  The authors were psychologist Daniel Freeman–a strong proponent of empirically supported treatments–and his journalist brother, Jason.


Briefly, advocates of EST’s hold that certain therapies are better than others in the treatment of specific disorders.  Lists of such treatments are created–for example, the NICE Guidelines–dictating which of the therapies are deemed “best.”  Far from innocuous, such lists are, in turn, used to direct public policy, including both the types of treatment offered and the reimbursement given.

Interestingly, in the article, Freeman and Freeman base their conclusion that “the dodo was wrong” on a single study.  Sure enough, that one study, comparing CBT to psychoanalysis, found that CBT resulted in superior effects in the treatment of bulimia.  No other studies were mentioned to bolster this bold claim–an assertion that would effectively overturn nearly 50 years of robust research findings documenting no difference in outcome among competing treatment approaches.

In contrast to what is popularly believed, extraordinary findings from single studies are fairly common in science.  As a result, scientists have learned to require replication by multiple investigators working in different settings.

The media, they’re another story.  They love such studies.  The controversy generates interest, capturing readers’ attention.  Remember cold fusion?  In 1989, researchers Stanley Pons and Martin Fleischmann–then two of the world’s leading electrochemists–claimed that they had produced a nuclear reaction at room temperature–a finding that would, if true, not only overturn decades and decades of prior research and theory but, more importantly, revolutionize energy production.

The media went nuts.  TV and print couldn’t get enough of it.  The hope for a cheap, clean, and abundant source of energy was simply too much to ignore.  The only problem was that, in the time that followed, no one could replicate Pons and Fleischmann’s results.  No one.  While the media ran off in search of other, more tantalizing findings to report, cold fusion quietly disappeared, becoming a footnote in history.

Back to The Guardian.  Curiously, Freeman and Freeman did not mention another, truly massive study published in Clinical Psychology Review—a study available in print at the time their article appeared.  In it, the researchers used the statistically rigorous method of meta-analysis to review results from 53 studies of psychological treatments for eating disorders.  Fifty-three!  Their finding?  Confirming mountains of prior evidence: no difference in effect between competing therapeutic approaches.  NONE!

Obviously, however, such results are not likely to attract much attention.
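For those curious about the mechanics, the heart of a meta-analysis is a precision-weighted average: each study's effect size is weighted by the inverse of its variance, so larger and more precise studies count for more. The following is a minimal fixed-effect sketch with invented numbers; published meta-analyses, including the one above, typically use more elaborate random-effects models:

```python
def pooled_effect(effects, variances):
    """Fixed-effect pooled effect size: the inverse-variance
    weighted mean of the individual study effect sizes."""
    weights = [1.0 / v for v in variances]  # precision of each study
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Invented example: three studies comparing two therapies.
# A pooled d near zero means "no difference between approaches."
effects = [0.10, -0.05, 0.02]   # hypothetical between-treatment d values
variances = [0.04, 0.09, 0.02]  # hypothetical sampling variances
print(round(pooled_effect(effects, variances), 3))  # 0.034
```

With 53 studies instead of three, the same logic applies; a pooled effect hovering near zero across that many samples is what "no difference between competing approaches" means in practice.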


Sadly, the same day that the article appeared in The Guardian, John R. Huizenga passed away.  Huizenga is perhaps best known as one of the physicists who helped build the atomic bomb.  Importantly, however, he was also among the first to debunk the claims about cold fusion made by Pons and Fleischmann.  His real-world experience, and decades of research, made clear that the reports were a case of dumb (cold fusion) being followed by dumber (media reports about cold fusion).

“How ironic this stalwart of science died on this day,” I thought, “and how inspiring his example is of ‘good science.'”

I spent the rest of the day replying to my emails, including the link to the study in Clinical Psychology Review (smart). “Don’t believe the hype,” I advised, “stick to the data” (and smarter)!

Filed Under: Practice Based Evidence Tagged With: CBT, Clinical Psychology Review, Daniel Freeman, dodo verdict, eating disorder, Jason Freeman, Martin Fleischmann, meta-analysis, NICE, psychoanalysis, psychotherapist, psychotherapy, research, Saul Rosenzweig, Stanley Pons, the guardian

Evidence-based Practice is a Verb not a Noun

April 8, 2013 By scottdm 1 Comment

Evidence-based practice (EBP).  What is it?  Take a look at the graphic above.  According to the American Psychological Association and the Institute of Medicine, there are three components: (1) the best evidence; in combination with (2) individual clinical expertise; and consistent with (3) patient values and expectations.  Said another way, EBP is a verb.  Why then do so many treat it as a noun, continually linking the expression to the use of specific treatment approaches?  As just one example, check out the guidelines published for the treatment of people with PTSD by the National Institute for Clinical Excellence (NICE)–the U.K.’s equivalent of the U.S. Substance Abuse and Mental Health Services Administration (SAMHSA).  Despite the above-noted definition, and the lack of evidence favoring one treatment over another, NICE equates EBP with the use of specific treatment approaches and boldly recommends certain methods over others.

Not long ago, ICCE Senior Associate, and U.K.-based researcher and clinician, Bill Andrews addressed the problems with the guidelines in a presentation to an audience of British practitioners.  He not only addresses the inconsistent use of the term “evidence-based practice” in the development of guidelines by governing bodies but also the actual research on PTSD.  After watching the clip, take some time to review the articles assembled below, which Bill cites during his presentation.  The main point here is that clinicians need not be afraid of EBP.  Instead, they need to insist that leaders and officials stick to the stated definition–a definition I’m perfectly content to live with, as are most practitioners I meet.  To wit: know what the evidence says “works,” then use my expertise to translate those findings into practices that fit the values, preferences, and expectations of the individual consumers I treat.

Click here to read the meta-analysis that started it all.  Don’t stop there, however; make sure to read the response to that study written by proponents of the NICE guidelines.  You’ll be completely up-to-date if you finish with our response to that critique.

Filed Under: Practice Based Evidence Tagged With: American Psychological Association, evidence based practice, Institute of Medicine, NICE, NREPP, ptsd, SAMHSA

The Revolution in Swedish Mental Health Services: UPDATE on the CBT Monopoly

April 5, 2013 By scottdm Leave a Comment

No blog post I’ve ever published has received as much attention as the one on May 13th, 2012 detailing changes to Swedish mental health practice.  At the time, I reported research results showing that the massive investment of resources in training therapists in CBT had not translated into improved outcomes or efficiency in the treatment of people with depression and anxiety.  In short, the public experiment of limiting training and treatment to so-called “evidence-based methods” had failed to produce tangible results.  The findings generated publications in Swedish journals, as well as merited commentary in Swedish newspapers and on the radio.

I promised to keep people updated if and when research became available in languages other than Swedish.  This week, the journal Psychotherapy published an article comparing the outcomes of three different treatment approaches: CBT, psychodynamic, and integrative-eclectic psychotherapy.  Spanning a three-year period and gathered at 13 outpatient clinics, the results showed that psychotherapy was remarkably effective regardless of the type of treatment offered!  Read the study yourself and then ask: when will a simpler, less expensive, and more client-centered approach to ensuring effective and efficient behavioral health services be adopted?  Routinely seeking feedback from consumers regarding the process and outcome of care provides such an alternative.  The failure to find evidence that adopting specific models for specific disorders improves outcomes indicates the time has come.  You can learn more about Feedback-Informed Treatment (FIT), a practice recently designated “evidence-based” by the Substance Abuse and Mental Health Services Administration (SAMHSA), by visiting the International Center for Clinical Excellence web-based community or attending an upcoming training with me in Chicago or on the road.

  • Learn more about what is going on in Sweden by reading:

Everyday Evidence: Outcomes of Psychotherapies in Swedish Public Health Services (Psychotherapy, Werbart et al., 2013)

  • Here’s one additional reference for those of you who read Swedish.  It’s the official summary of the results from the study that started this entire thread:
Delrapport II, slutversion (“Interim Report II, final version”)

Filed Under: Practice Based Evidence Tagged With: CBT, evidence based practice, ors, outcome rating scale, psychotherapy, session rating scale, srs, sweden

National Psychotherapy Day: A Recognition, Celebration, and Call for Action

September 24, 2012 By scottdm Leave a Comment

With all the challenges facing the profession, it is important to highlight people and organizations that are working hard to make a difference.  On that note, tomorrow, Tuesday the 25th of September 2012, is the very first National Psychotherapy Day.  Having a day of unified, active promotion of psychotherapy is the brainchild of psychologist Ryan Howes.  At his side is the Psychotherapy Foundation (PF), a nonprofit foundation dedicated to promoting the therapeutic relationship as an “effective, long-lasting, collaborative approach” to resolving emotional, behavioral, and relational problems.  What’s not to like?  Dr. Howes and the PF are encouraging people who have seen a therapist to talk or blog about their experience.  They are calling on therapists to commit to sharing research documenting the effectiveness of psychotherapy with the public (write a letter to the editor of your local paper, offer to do an interview, give a brief presentation at the Chamber of Commerce).

Surveys show that the two primary barriers to seeking the help of a therapist are: (1) the cost of the service (cited by 81%); and (2) lack of confidence in the outcome of therapy (78%).  Of these two barriers, the first is entirely understandable.  Times are tough and treatment costs money.  It is for this reason that Dr. Howes and the PF are asking all who participate in the day to support their local, low-fee counseling centers in whatever way possible.

The second barrier is more troubling and, frankly, difficult to understand and address.  Research overwhelmingly supports the efficacy of psychological treatment.  Indeed, the American Psychological Association issued a rare, formal resolution this last month recognizing the effectiveness of psychotherapy!  Listen to the language:

  • Whereas the effects of psychotherapy …are widely accepted to be significant and large;
  • Whereas the results of psychotherapy tend to last longer and be less likely to require additional treatment courses than psychopharmacological treatments;
  • Whereas comparisons of different forms of psychotherapy most often result in relatively nonsignificant differences, and contextual and relationship factors (not captured by a patient’s diagnosis or by the therapist’s use of a specific psychotherapy) mediate or moderate outcomes;
  • Whereas the best research evidence conclusively shows that individual, group, and couple/family psychotherapy are effective for a broad range of…problems with children, adolescents, adults, and older adults;
  • THEREFORE be it resolved that, as a healing practice and professional service, psychotherapy is effective and highly cost effective…and should be included in the health care system as an established evidence-based practice.

Strong words, right?  Even so, it’s very clear that the public’s lingering doubts about effectiveness will require more than a proclamation.  It is for this reason that Dr. Howes and the PF are asking all those currently in care to provide constructive feedback to their therapist.  Therapists, in turn, are encouraged to seek and respond to feedback from their clients.  As reviewed here on this blog, numerous studies document the positive impact that routine feedback from clients has on retention and outcome of service.  Free evidence-based tools for soliciting formal feedback from consumers are available for download from this website.  Plus, the International Center for Clinical Excellence web-based community–the largest group of clinicians and researchers dedicated to improving the quality and outcome of psychotherapy via the use of ongoing feedback–stands ready and willing to be of support.

So, why the turquoise?  Well, it’s the official color of National Psychotherapy Day.  To show your support, Dr. Howes and the PF are asking everyone to wear something in that color tomorrow.

Filed Under: behavioral health, Practice Based Evidence Tagged With: brief therapy, cdoi, icce, randomized clinical trial

Revolution in Swedish Mental Health Practice: The Cognitive Behavioral Therapy Monopoly Gives Way

May 13, 2012 By scottdm 34 Comments

Sunday, May 13th, 2012
Arlanda Airport, Sweden

Over the last decade, Sweden, like most Western countries, embraced the call for “evidence-based practice.”  Socialstyrelsen, the country’s National Board of Health and Welfare, developed and disseminated a set of guidelines (“riktlinjer”) for mental health practice.  Topping the list of methods was, not surprisingly, cognitive-behavioral therapy.

The Swedish State took the list seriously, restricting payment for training of clinicians and treatment of clients to cognitive behavioral methods.  In the last three years, a billion Swedish crowns were spent on training clinicians in CBT.  Another billion was spent on providing CBT to people with diagnoses of depression and anxiety.  No funding was provided for training or treatment in other methods. 

The State’s motives were pure: use the best methods to decrease the number of people who become disabled as a result of depression and anxiety.  As in other countries, the percentage of people in Sweden who exit the work force and draw disability pensions has increased dramatically.  As a result, costs skyrocketed.  Even more troubling, far too many became permanently disabled.

The solution?  Identify methods which have scientific support, or what some called, “evidence-based practice.” The result?  Despite substantial evidence that all methods work equally well, CBT became the treatment of choice throughout the country.  In point of fact, CBT became the only choice.

As noted above, Sweden is not alone in embracing practice guidelines.  The U.K. and U.S. have charted similar paths, as have many professional organizations.  Indeed, the American Psychological Association has now resurrected its plan to develop and disseminate a series of guidelines advocating specific treatments for specific disorders.  Earlier efforts by Division 12 (“Clinical Psychology”) met with resistance from the general membership as well as scientists who pointed to the lack of evidence for differential effectiveness among treatment approaches. 

Perhaps APA and other countries can learn from Sweden’s experience.  The latest issue of Socionomen, the official journal for Swedish social workers, reported the results of the government’s two billion Swedish crown investment in CBT.  The widespread adoption of the method has had no effect whatsoever on the outcomes of people disabled by depression and anxiety.  Moreover, a significant number of people who were not disabled at the time they were treated with CBT became disabled, costing the government an additional one billion Swedish crowns.  Finally, nearly a quarter of those who started treatment dropped out, costing an additional 340 million!

In sum, billions were spent training therapists in and treating clients with CBT, to little or no effect.

Since the publication of Escape from Babel in 1995, my colleagues and I at the International Center for Clinical Excellence have gathered, summarized, published, and taught about research documenting little or no difference in outcome between treatment approaches.  All approaches worked about equally well, we argued, suggesting that efforts to identify specific approaches for specific psychiatric diagnoses were a waste of precious time and resources.  We made the same argument, citing volumes of research in two editions of The Heart and Soul of Change.

Yesterday, I presented at Psykoterapi Mässan, the country’s largest free-standing mental health conference.  As I have on previous visits, I talked about “what works” in behavioral health, highlighting data documenting that the focus of care should shift away from treatment model and technique, focusing instead on tailoring services to the individual client via ongoing measurement and feedback.  My colleague and co-author Bruce Wampold had been in the country a month or so earlier, singing the same tune.

One thing about Sweden:  the country takes data seriously.  As I sat down this morning to eat breakfast at the home of my long-time Swedish friend, Gunnar Lindfeldt, the newscaster announced on the radio that Socialstyrelsen had officially decided to end the CBT monopoly (listen here).  The experiment had failed.  To be helped, people must have a choice. 

“What have we learned?” Rolf Holmqvist asks in Socionomen, “Treatment works…at the same time, we have the possibility of exploring…new perspectives.  First, getting feedback during treatment…taking direction from the patient at every session while also tracking progress and the development of the therapeutic relationship!”

“Precis,” (exactly) my friend Gunnar said. 

And, as readers of my blog know, using the best evidence, informed by clients’ preferences and ongoing monitoring of progress and alliance, is evidence-based practice.  How the concept ever got translated into creating lists of preferred treatments is anyone’s guess and, now, unimportant.  Time to move forward.  The challenge ahead is helping practitioners learn to integrate client feedback into care—and here, Sweden is leading the way.

“Skål Sverige!”

Filed Under: Behavioral Health, evidence-based practice, Feedback Informed Treatment - FIT, Practice Based Evidence Tagged With: CBG, continuing education, evidence based practice, icce, Socialstyrelsen, sweden

Is the "Summer of Love" Over? Positive Publication Bias Plagues Pharmaceutical Research

March 27, 2012 By scottdm Leave a Comment


Evidence-based practice is only as good as the available “evidence”–and on this subject, research points to a continuing problem with both the methodology and type of studies that make it into the professional literature.  Last week, PLoS Medicine, a peer-reviewed, open-access journal of the Public Library of Science, published a study showing a positive publication bias in research on so-called atypical antipsychotic drugs.  Comparing articles appearing in journals to the FDA database, researchers found that almost all positive studies were published, while clinical trials with negative or questionable results either were not or–and get this–were published as having positive results!

Not long ago, similar yet stronger results appeared in the same journal on antidepressants.  Again, in a comparison with the FDA registry, researchers found all positive studies were published, while clinical trials with negative or questionable results either were not or–and get this–were published as having positive results!  The problem is far from insignificant.  Indeed, a staggering 46% of studies with negative results were either not published, or published but reported as positive.

Maybe the “summer of love” is finally over for the field and broader American public.  Today’s Chicago Tribune has a story by Kate Kelland and Ben Hirschler reporting data about sagging sales of anti-depressants and multiple failures to bring new, “more effective” drug therapies to market.  Taken together, robust placebo effects, the FDA mandate to list all trials (positive and negative), and an emphasis in research on conducting fair comparisons (e.g., comparing any new “products” to existing ones) make claims about “new and improved” effectiveness challenging.

Still, one sees ads on TV making claims about the biological basis of depression–the so-called “biochemical imbalance.”  Perhaps this explains why a recent study of Medicaid clients found that the cost of treating depression rose by 30% over the last decade while outcomes did not improve at all during the same period.  The cause for the rise in costs?  Increased use of psychiatric drugs–in particular, anti-psychotics in cases of depression.

“It’s a great time for brain science, but at the same time a poor time for drug discovery for brain disorders,” says David Nutt, professor of neuropsychopharmacology, cited in the Chicago Tribune, “That’s an amazing paradox which we need to do something about.”

Here’s an idea: how about not assuming that problems in living are reducible to brain chemistry?  That the direction of causality for much of what ails people is not brain to behavior but perhaps behavior to brain?  On this note, it is sad to note that while the percentage of clients prescribed drugs rose from 81% to 87%–with no improvement in effect–the number of those receiving psychotherapy dropped from 57% to 38%.

Here’s what we know about psychotherapy: it works and it has a far less troublesome side effect profile than psychotropic drugs.  No warnings needed for dry mouth, dizziness, blood and liver problems, or sexual dysfunction.  The time has come to get over the collective 1960’s delusion of better living through chemistry.

Filed Under: Practice Based Evidence Tagged With: behavioral health, continuing education, depression, evidence based practice, icce, Medicaid, mental health, psychotherapy

Goodbye Mr. & Ms. Know-it-All: Redefining Competence in the Era of Increasing Complexity

February 12, 2012 By scottdm 3 Comments

Every day behavioral health professionals make hundreds of decisions.  As experts in the field, they meet and work successfully with diverse clients presenting an array of different difficulties.  Available evidence indicates that the average person who receives care is better off than 80% of those with similar problems who do not.  Outcomes in mental health are on par with or better than most medical treatments and, crucially, have far fewer side effects!  Psychotherapy, for example, is equal in effect to coronary artery bypass surgery and three times more effective than fluoride for preventing cavities.
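For the statistically curious, the “better off than 80%” figure is the standard way of translating an effect size of roughly d = 0.8 into plain language: under a normal model, the percentile is simply the standard normal CDF evaluated at d. A minimal sketch (illustrative only, not drawn from the studies cited):

```python
from statistics import NormalDist

def percentile_superiority(d: float) -> float:
    """Fraction of the untreated comparison group scoring below the
    average treated person, assuming normal distributions with equal
    variance and a standardized mean difference (Cohen's d) of d."""
    return NormalDist().cdf(d)

# An effect size of d = 0.8 puts the average treated client ahead of
# roughly 79% of untreated peers -- close to the "80%" cited above.
print(round(percentile_superiority(0.8) * 100))  # → 79
```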

Not all the news is good, however.  Dropout rates run around 25% or higher.  Said another way, clinicians do great work with the people who stay.  Unfortunately, many do not, resulting in increased costs and lost opportunities.  Another problem is that therapists, the data indicate, are not particularly adept at identifying clients at risk for dropping out or deterioration.  For decades, research has shown that approximately 10% of people worsen while in treatment.  Practitioners, despite what they may believe, are none the wiser.  Finally, it turns out that a small percentage (between 10 and 20%) of people in care account for the lion’s share of expenses in behavioral health service delivery (in case you are wondering, roughly the same figures apply in the field of medicine).  Such people continue in care for long periods, often receiving an escalating and complicated array of services, without relief.  At the same time, clinician caseloads and agency waiting lists grow.

What can be done?

At one time, being a professional meant that one possessed the knowledge, training, and skills to deliver the right services to the right people for the right problem in a consistent, correct, and safe manner.  To that end, training requirements–including schooling, certification, and continuing professional development–expanded, exponentially so.  Today’s behavioral health professionals spend more time training and are more highly specialized than ever before.  And yet, the above noted problems persist.

Some call for more training, others for increasing standardization of treatment approaches, many for more rigorous licensing and accreditation standards.  The emphasis on “empirically supported treatments”–specific methods for specific diagnoses–typifies this approach.  However, relying as these solutions do on an antiquated view of professional knowledge and behavior, each is doomed to fail.

In an earlier era, professionals were “masters of their domain.”  Trained and skillful, the clinician diagnosed, developed a plan for treatment, then executed, evaluated, and tailored services to maximize the benefit to the individual client.  Such a view assumes that problems are either simple or complicated, puzzles that are solvable if the process is broken down into a series of steps.  Unfortunately, the shortcomings in behavioral health outcomes noted above (dropout rates, failure to identify deterioration and lack of progress) appear to be problems that are not so much simple or complicated but complex in nature.  In such instances, outcomes remain uncertain throughout the process.  Getting things right is less about following the formula than continually making adjustments, as “what works” with one person or situation may not easily transfer to another time or place.  Managing such complexity requires a change of heart and direction, a new professional identity.  One in which the playing field between providers and clients is leveled, where power is moved to the center of the dyad and shared, where ongoing client feedback takes precedence over theory and protocol.

In his delightful and engaging book, The Checklist Manifesto, physician and surgeon Atul Gawande provides numerous examples in medicine, air travel, computer programming, and construction where simple feedback tools have resulted in dramatic improvements in efficiency, effectiveness, and safety.  The dramatic decrease in airplane-related disasters over the last three decades is one example among many–all due to the introduction of simple feedback tools.  Research in the field of behavioral health documents similar improvements.  Multiple studies document that routinely soliciting feedback regarding progress and the alliance results in significantly improved effectiveness, lower dropout rates, and less client deterioration–and all this while decreasing the cost of service delivery.  The research and tools are described in detail in a new series of treatment manuals produced by the members and associates of the International Center for Clinical Excellence–six simple, straightforward, how-to guidebooks covering everything from the empirical foundations, administration, and interpretation of feedback tools, to implementation in diverse practice settings.  Importantly, the ICCE Manuals on Feedback Informed Treatment (FIT) are not a recipe or cookbook.  They will not teach you how to do treatment.  You will learn, however, skills for managing the increasingly complex nature of modern behavioral health practice.

In the meantime, here’s a fantastic video of Dr. Gawande on the subject.  Use the cursor to skip ahead to the 2:18 mark:

Filed Under: Behavioral Health, Feedback Informed Treatment - FIT, Practice Based Evidence Tagged With: Atul Gawande, behavioral health, feedback informed treatment, icce, The Checklist Manifesto

Yes, More Evidence: Spanish version of the ORS Validated by Chilean Researchers

June 16, 2011 By scottdm Leave a Comment

Last week, Chile.  This week, Perth, Australia.  Yesterday, I landed in Sydney following a 30-hour flight from the United States.  I managed to catch the last flight out to Perth before all air travel was grounded due to another ash cloud–this time coming from Chile!  I say “another” as just over a year ago, I was trapped behind the cloud of ash from the Icelandic eruption!  So far so good.  Today, I’ll spend the day talking about “excellence” in behavioral healthcare.

Before heading out to teach for the day, I wanted to upload a report from a recent research project conducted in Chile investigating the statistical properties of the ORS.  I’ve attached the report here so you can read for yourself.  That said, let me present the highlights:

  • The Spanish version of the ORS is reliable (alpha coefficients of .90–.95).
  • The Spanish version of the ORS shows good construct and convergent validity (correlations with the OQ-45 of .50 and .58).
  • The Spanish version of the ORS is sensitive to change in a treated population.

The authors of the report, presented at the Society for Psychotherapy Research meeting, conclude, “The ORS is a valid instrument to be used with the Chilean population.”
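For readers unfamiliar with the statistics, the alpha coefficients reported above are Cronbach’s alpha, a reliability index computed from item and total-score variances. Here is a minimal sketch with made-up scores (the data below are purely illustrative, not from the Chilean study):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance
    of total scores). Rows are respondents, columns are items."""
    k = len(scores[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(var(col) for col in zip(*scores))
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical responses to a four-item, 0-10 measure (like the ORS)
scores = [
    [2, 3, 2, 3],
    [7, 8, 7, 8],
    [5, 5, 6, 6],
    [9, 9, 8, 9],
    [3, 4, 3, 4],
]
print(round(cronbach_alpha(scores), 2))  # → 0.99
```

When items move together across respondents, as they do here, alpha approaches 1; values of .90 and above, like those reported for the ORS, indicate excellent internal consistency.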

As I asked in my blogpost last week, “how much more evidence is needed?”  Now, more than ever, clinicians need simple, valid, reliable, and feasible tools for evaluating the process and outcome of behavioral healthcare.  The ORS and SRS FIT the bill!

Filed Under: FIT, PCOMS, Practice Based Evidence Tagged With: behavioral health, cdoi, Chile, evidence based practice, mental health, ors, outcome rating scale, session rating scale, srs

The War on Unhappiness Heats Up

November 24, 2010 By scottdm Leave a Comment

Back in September, I blogged about an article by Gary Greenberg published in the August issue of Harper’s magazine that took aim at the “helping profession.”   He cast a critical eye on the history of the field, its colorful characters, constantly shifting theoretical landscape, and claims and counterclaims regarding “best practice.”   Several paragraphs were devoted to my own work; specifically, research documenting the relatively inconsequential role that particular treatment approaches play in successful treatment and the importance of using ongoing feedback to inform and improve mental health services.

Just this last week, while I was overseas teaching in Romania (more on that trip soon), I received an email from Dr. Dave of ShrinkRapRadio who felt the piece by Greenberg was unfair to the field in general and a mischaracterization of the work by many of the clinicians cited in the article, including me.  “I’ve got a blog on the Psychology Today website and I’m planning to take him to task a bit,” he wrote.

If you have not had a chance to read the Greenberg article, you can find it on my original blogpost.  It’s a must read, really.  As I said then, whatever your opinion about the present state of practice, “Greenberg’s review of current and historical trends is sobering to say the least–challenging mental health professionals to look in the mirror and question what we really know for certain–and a must read for any practitioner hoping to survive and thrive in the current practice environment.”  Then, take a moment and read Dr. Dave’s response.  With his permission, I’ve posted it below!


Popping The Happiness Bubble: The Backlash Against Positive Psychology

Readers will recall that in Part 1, I suggested that a backlash against the ebullience of the positive psychology movement was probably inevitable. The most visible sign of that rebellion was last year’s best-selling book by Barbara Ehrenreich, Bright-Sided: How The Relentless Promotion of Positive Thinking Has Undermined America. While I found myself in agreement with much of her appraisal of American culture and our historical fascination with “positive thinking,” I thought her critique of positive psychology fell short by equating positive psychology to “positive thinking.” It also seemed to me that she failed to recognize that a huge body of research conducted by an army of independent researchers is emerging on a very diverse range of topics, which have been subsumed under the general heading of positive psychology. And, finally, much of her argument was based on an ad hominem attack on Martin Seligman.

I found further evidence of this backlash in the lead article in the October 2010 issue of Harper’s by psychotherapist Gary Greenberg, “The War on Unhappiness: Goodbye Freud, Hello Positive Thinking.” Greenberg is the author of Manufacturing Depression, a book that came out earlier this year. In addition, he is a prolific writer who has published articles that bridge science, politics, and ethics in a number of leading magazines. So he’s got great credentials both as a psychologist and a writer. Yet, I found this particular article unsatisfying. At least, that was my reaction upon first reading. As I later read it a second time to write about it here, I got a clearer sense of what he was up to and found myself in substantial agreement with his overall thrust.

The stimulus for Greenberg’s piece appears to have been his attendance at the annual Evolution of Psychotherapy Conference in Anaheim earlier this year. He seems to take a pretty dyspeptic view of the whole event: “Wandering the conference, I am acquainted, or reacquainted, with Cognitive Behavioral Therapy, Ericksonian Hypnosis, Emotionally Focused Therapy, Focusing, Buddhist Psychology, Therapist Sculpting, Facilitating Gene Expression, and Meditative methods.” As a forty-year veteran of the California personal-growth/therapy scene myself, I know it’s easy to develop a jaundiced eye over time as a panoply of approaches come and go. Yet, I have to say my own view, as a result of over 300 podcast interviews with psychologists across a broad spectrum of orientations, is that there is a developing consensus and that the differences between many approaches are relatively minor.

By contrast, Greenberg seems to go into despair.

As I say, it took two readings of Greenberg’s article to really get the overall sweep. On first reading, it seems to be a bit of a meander, beginning with some slighting anecdotes about Freud. Then we’re on to the Anaheim conference and some handwringing about the seeming tower of Babel created by the profusion of therapeutic approaches. This segues into a discussion of Rosenzweig’s 1936 “Dodo Bird Effect,” which asserts that therapeutic orientation doesn’t matter because all orientations work. As the Dodo pronounces in Alice in Wonderland, “Everyone has won and all must have prizes.” According to Greenberg, the Dodo Bird Effect has been borne out in subsequent studies and the requisite common ingredient for therapeutic success is faith, both the client’s and the therapist’s.

Greenberg goes on to describe several of the presentations, most notably by Otto Kernberg, Scott D. Miller, David Burns, and Martin Seligman. Part of what put me off about this article on my first reading is that I have conducted in-depth interviews with the first three of these gentlemen and I would not have recognized them from Greenberg’s somewhat muddled account.

Otto Kernberg, MD, one of the grand old men of psychoanalysis, is characterized as intoning “the old mumbo jumbo about the Almost Untreatable Narcissistic Patient…” In my opinion, this really slights his lifetime commitment to research, his many contributions to object relations theory, and his role as Director of The Institute for Personality Disorders at the Cornell Medical Center.  In my interview with Dr. Kernberg, I was struck by the flexibility of this octogenarian in incorporating the findings of neuroscience, genetics, and even cognitive behavioral therapy in his thinking.

Greenberg seems to use Dr. Scott D. Miller’s research as supporting the Dodo Bird effect. I attended a daylong workshop with Scott Miller a few years ago and it was one of the best presentations I’ve ever seen. I also interviewed him for one of my podcasts. The key takeaway for me from Scott Miller’s work is that the Dodo Bird effect shows up only when therapeutic effectiveness is averaged across therapists. That is, on average, all psychotherapies are moderately effective. However, Miller reports that not all therapists are equally effective and that, if you look at therapists who are consistently rated as effective by their clients vs. therapists who are consistently rated as ineffective, then therapy emerges as a highly worthwhile enterprise.

As Miller said in my interview with him, “If the consumer is able to feed back information to the system about their progress, whether or not progress is being made, those two things together can improve outcomes by as much as 65%.”

As I say, I had difficulty recognizing Miller in Greenberg’s account. Evidently, Greenberg is critical of Miller having developed a standardized set of rating scales for clients to provide feedback to their therapists. Greenberg sees these scales as playing into the hands of managed care and the trend towards “manualized” therapies. However, in my interview with Miller, he is very clearly critical of managed care, at least in terms of their emphasis on particular treatments for particular diagnostic categories. As Miller said in his interview with me, “If there were inter-rater reliability that would be one thing; the major problem with the DSM is that it lacks validity, however. That these groupings of symptoms actually mean anything… and that data is completely lacking… We are clustering symptoms together much the way medicine did in the medieval period: this is the way we treated people and thought about people when we talked about them being phlegmatic for example; or the humors that they had. Essentially they were categorizing illnesses based on clusters of symptoms.”

I also had difficulty recognizing Stanford psychiatry professor, David Burns, from Greenberg’s summary of the session he attended with Burns.  In short, Greenberg portrays Burns, who has developed a Therapist’s Toolkit inventory as wishing to replace “open-ended conversation with a five-item test… to take an X-ray of our inner lives.” This runs counter to my experience of Burns who, for example, in my interview with Dr. Burns about his cognitive therapy approach to couples work said, “…cognitive therapy has become probably the most widely practiced and researched form of psychotherapy in the world. But I really don’t consider myself a cognitive therapist or any other school of therapy; I’m in favor of tools, not schools of therapy. I think all the schools of therapy have had important discoveries and important angles, but the problem is they are headed up by gurus who push too hard trying to say cognitive therapy is the answer to everything, or rational emotive therapy is the answer to everything, or psychoanalysis is the answer to everything. And that is reductionism, and kind of foolish thinking to my point of view.” This hardly sounds like someone who thinks he’s invented a paper-and-pencil test that will be the end-all of psychotherapy.

And then Greenberg goes on to skewer positive psychology, which is what drew me to his article in the first place. After all, the title “The War on Unhappiness” seems to promise that. Like Ehrenreich, however, Greenberg’s critique is largely an ad hominem attack on Seligman. For example, referring to his earlier work subjecting dogs to electric shock boxes to study learned helplessness, Greenberg characterizes Seligman as, “More curious about dogs than about the people who tortured them…” He goes on to recount Seligman’s presentation to the CIA on learned helplessness which became the basis for enhanced “interrogation” techniques in Iraq. Now, we are told Seligman is working with the U.S. Army to teach resilience to our troops. In Greenberg’s view, Seligman would have us going his dogs one better by “thriving on the shocks that come our way rather than merely learning to escape them.”

So, it turns out that Greenberg’s attack on positive psychology is rather incidental to his larger concern which turns out to be that clinical psychology has sold its soul to the evidence-based, managed-care lobby in order to feed at the trough of medical reimbursement.

Greenberg’s article is a circular ramble that begins with slighting references to Freud and psychoanalysis and then ends with Freud as the champion of doubt.

It took me two readings to see that Greenberg is essentially using Miller, Burns, and Seligman as foils to attack smug certainty and blind optimism, the enemies of doubt. Of himself, Greenberg concludes, “I’m wondering now why I’ve always put such faith in doubt itself, or, conversely, what it is about certainty that attracts me so much, that I have spent twenty-seven years, thousands of hours, millions of other people’s dollars to repel it.”

Greenberg evidently values the darker side, the questions, the unknown, the mystery. “Even if Freud could not have anticipated the particulars – the therapists-turned-bureaucrats, the gleaming prepackaged stories, the trauma-eating soldiers-he might have deduced that a country dedicated in its infancy to the pursuit of happiness would grow up to make it a compulsion. He might have figured that American ingenuity would soon, maybe within a century, find a way to turn his gloomy appraisal of humanity into a psychology of winners.”

I think I’m in agreement with at least some of Greenberg’s larger argument. My fear, however, is that the general reader will come away with the impression that psychotherapists don’t know what they are doing and that the whole enterprise is a waste of time and money. That would be too bad. Both because I don’t think it’s true and I don’t think Greenberg does either.

I encourage you to find Greenberg’s article and to post your own reactions here in the comments area.

I had planned to stake out my own position on positive psychology in response to the critiques of Ehrenreich and Greenberg. It’s looking like there may need to be a Part 3. Stay tuned!

Filed Under: Practice Based Evidence Tagged With: Barbara Ehrenreich, evidence based practice, gary greenberg, healthcare, Manufacturing Depression, mental health, psychology today

Pushing the Research Envelope: Getting Researchers to Conduct Clinically Meaningful Research

November 5, 2010 By scottdm Leave a Comment


At the recent ACE conference, I had the pleasure of learning from the world’s leading experts on expertise and top performance.  Equally stimulating were conversations in the hallways between presentations with clinicians, policy makers, and researchers attending the event.  One of those was Bill Andrews, the director of the HGI Practice Research Network in the UK, whose work over the last 3+ years has focused on clinicians whose outcomes consistently fall in the top quartile of effectiveness.

In this brief interview, Bill talks about the “new direction” his research on top performing clinicians is taking.  He is truly “pushing the research envelope,” challenging the field to move beyond simplistic randomized clinical trials comparing different treatment packages.  Take a look:

Filed Under: Behavioral Health, Conferences and Training, evidence-based practice, Feedback Informed Treatment - FIT, Practice Based Evidence Tagged With: behavioral health, cdoi, continuing education, evidence based practice, icce

What is "Best Practice?"

October 20, 2010 By scottdm Leave a Comment

You have to admit the phrase “best practice” is the buzzword of late. Graduate school training programs, professional continuing education events, policy and practice guidelines, and funding decisions are tied in some form or another to the concept. So, what exactly is it? At the State and Federal level, lists of so-called “evidence-based” interventions have been assembled and are being disseminated. In lockstep, as I reviewed recently, are groups like NICE. Their message is simple and straightforward: best practice is about applying specific treatments to specific disorders.
Admittedly, the message has a certain “common sense” appeal.  The problem, of course, is that behavioral health interventions are not the psychological equivalent of penicillin. In addition to the numerous studies highlighted on this blog documenting the failure of the “specific treatments for specific disorders” perspective, consider research published in the Spring 2010 edition of the Journal of Counseling and Development by Scott Nyman, Mark Nafziger, and Timothy Smith. Briefly, the authors examined outcome data to “evaluate treatment effectiveness across counselor training level [and found] no significant outcome differences between professional staff and…interns, and practicum students” (p. 204). Although the researchers are careful to make all the customary qualifications, the conclusion–especially when combined with years of similar findings reported in the literature–is difficult to escape: counseling and psychotherapy are highly regulated activities requiring years of expensive professional training that ultimately fails to make the practitioner any better than they were at the outset.
What gives? Truth is, the popular conceptualization of “best practice” as a “specific treatment for a specific disorder” is hopelessly outdated. In a report few have read, the American Psychological Association (following the lead of the Institute of Medicine) redefined evidence-based, or best practice, as, “the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences.” Regarding the phrase “clinical expertise” in this definition, the Task Force stated, “Clinical expertise…entails the monitoring of patient progress (and of changes in the patient’s circumstances—e.g., job loss, major illness) that may suggest the need to adjust the treatment (Lambert, Bergin, & Garfield, 2004a). If progress is not proceeding adequately, the psychologist alters or addresses problematic aspects of the treatment (e.g., problems in the therapeutic relationship or in the implementation of the goals of the treatment) as appropriate” (p. 273; emphasis included in the original text).
Said another way, instead of choosing the “specific treatment for the specific disorder” from a list of approved treatments, best practice is:
·         Integrating the best evidence into ongoing clinical practice;
·         Tailoring services to the consumer’s characteristics, culture, and preferences;
·         Formal, ongoing, real-time monitoring of progress and the therapeutic relationship.
In sum, best practice is Feedback Informed Treatment (FIT)—the vision of the International Center for Clinical Excellence. And right now, clinicians, researchers, and policy makers are learning, sharing, and discussing implementation of FIT in treatment settings around the globe on the ICCE web-based community.
Word is getting out. As just one example, consider Accreditation Canada, which recently identified FIT as a “leading practice” for use in behavioral health services. According to the website, leading practices are defined as “creative, evidence-based innovations [that] are commendable examples of high quality leadership and service delivery.” The accreditation body identified FIT as a “simple, measurable, effective, and feasible outcome-based accountability process,” stating that the approach is a model for the rest of the country! You can read the entire report here.
How exactly did this happen? Put bluntly, people and hard work. ICCE senior associates and certified trainers, Rob Axsen and Cynthia Maeschalck, with the support and backing of Vancouver Coast Health, worked tirelessly over the last 5 years both implementing and working to gain recognition for FIT. Similar recognition is taking place in the United States, Denmark, Sweden, England, and Norway.
You can help. Next time someone—be it colleague, trainer, or researcher—equates “best practice” with using a particular model or list of “approved treatment approaches,” share the real, official, “approved” definition noted above.  Second, join Rob, Cynthia, and the hundreds of other practitioners, researchers, and policy makers on the ICCE helping to reshape behavioral health practice worldwide.

Filed Under: Behavioral Health, evidence-based practice, ICCE, Practice Based Evidence Tagged With: Accreditation Canada, American Psychological Association (APA), cdoi, Cochrane Review, evidence based practice, icce, NICE

What Works in the Treatment of Post Traumatic Stress Disorder? The Definitive Study

September 15, 2010 By scottdm 1 Comment

What works in the treatment of people with post-traumatic stress?  The influential Cochrane Collaboration–an “independent network of people” whose self-professed mission is to help “healthcare providers, policy makers, patients, their advocates and carers, make well-informed decisions”–concludes that “non trauma-focused psychological treatments [do] not reduce PTSD symptoms as significantly…as individual trauma focused cognitive-behavioral therapy (TFCBT), eye movement desensitization and reprocessing, stress management and group TFCBT.”  The same conclusion was reached by the National Institute for Health and Clinical Excellence (or NICE) in the United Kingdom, which has developed and disseminated practice guidelines unequivocally stating that “all people with PTSD should be offered a course of trauma focused psychological treatment (TFCBT) or eye movement desensitization and reprocessing (EMDR).”  And they mean all: adults and kids, young and old.  Little room left for interpretation here.  No thinking is required.  Like the old Nike ad, you should: “Just do it.”

Wait a minute though…what do the data say? Apparently, the NICE and Cochrane recommendations are not based on, well…the evidence–at least, that is, the latest meta-analytic research!  Meta-analysis, you will recall, is a procedure for aggregating results from similar studies in order to test a hypothesis, such as, “are certain approaches for the treatment of post traumatic stress more effective than others?”  A year ago, I blogged about the publication of a meta-analysis by Benish, Imel, & Wampold which clearly showed that there was no difference in outcome between treatments for PTSD and that the designation of some therapies as “trauma-focused” was devoid of empirical support, a fiction.
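For readers unfamiliar with the mechanics, here is a minimal sketch of the core calculation behind a fixed-effect meta-analysis: each study’s standardized effect size (e.g., Cohen’s d) is weighted by the inverse of its variance, so larger, more precise studies count for more. The study numbers below are entirely hypothetical, chosen only for illustration—they are not the Benish, Imel, & Wampold (2008) data.

```python
# Fixed-effect meta-analysis: inverse-variance weighted mean of effect sizes.
# The effect sizes (Cohen's d) and variances below are made up for
# illustration -- they are NOT from any actual PTSD trial.
import math

studies = [
    {"d": 0.45, "var": 0.04},  # hypothetical study 1
    {"d": 0.30, "var": 0.09},  # hypothetical study 2 (smaller, noisier)
    {"d": 0.55, "var": 0.02},  # hypothetical study 3 (largest, most precise)
]

weights = [1.0 / s["var"] for s in studies]            # inverse-variance weights
pooled_d = sum(w * s["d"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))              # standard error of pooled d

print(f"pooled d = {pooled_d:.3f} (SE = {pooled_se:.3f})")
```

Comparing treatments then amounts to asking whether the pooled difference between approaches is reliably different from zero—which, per Benish et al., it is not.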

So, how to account for the differences?  In a word, allegiance.  Although written by scientists, so-called “scholarly” reviews of the literature and “consensus panel” opinions inevitably reflect the values, beliefs, and theoretical predilections of the authors.  NICE guidelines, for example, read like a well planned advertising campaign for a single psychotherapeutic modality: CBT.  Indeed, the organization is quite explicit in its objective: “provide support for the local implementation of…appropriate levels of cognitive behavioral therapy.”   Astonishingly, no other approach is accorded the same level of support or endorsement despite robust evidence of the equivalence of outcomes among treatment approaches.  Meanwhile, the review of the PTSD literature and treatment recommendations published by the Cochrane Collaboration has not been updated since 2007–a full two years now having passed since the publication of the Benish et al. (2008) meta-analysis–and it was penned by a prominent advocate of…CBT…Trauma-focused CBT.

As I blogged about back in January, researchers and prominent CBT proponents published a critique of the Benish et al. (2008) meta-analysis in the March 2010 issue of Clinical Psychology Review (Vol. 30, No. 2, pages 269-76).  Curiously, the authors chose not to replicate the Benish et al. study, but rather claimed that bias, arbitrariness, lack of transparency, and poor judgement accounted for the findings.   As I promised at the time, I’m making the response we wrote–which appeared in the most recent issue of Clinical Psychology Review–available here.

Of course, the most important finding of the Benish et al. (2008) study and our later response (Wampold et al., 2010) is that mental health treatments work for people with post traumatic stress.  Such a conclusion is unequivocal.  At the same time, as we state in our response to the critique of Benish et al. (2008), “there is little evidence to support the conclusion…that one particular treatment for PTSD is superior to others or that some well defined ingredient is crucial to successful treatments of PTSD.”  Saying otherwise belies the evidence and diverts attention and scarce resources away from efforts likely to improve the quality and outcome of behavioral health services.


Filed Under: Behavioral Health, Practice Based Evidence Tagged With: Carl Rogers, continuing education, icce, post traumatic stress, PTSD, reimbursement

Error-centric Practice: How Getting it Wrong can Help you Get it Right

July 22, 2010 By scottdm 1 Comment

It’s an idea that makes intuitive sense but is simultaneously unappealing to most people. I, for one, don’t like it.  What’s more, it flies in the face of the “self-esteem” orientation that has dominated much of educational theory and practice over the last several decades.  And yet, research summarized in a recent issue of Scientific American Mind is clear: people learn the most when conditions are arranged so that they have to make mistakes.   Testing prior to learning, for example, improves later recall of the very information learners initially got wrong on the pre-test.  As is well known, frequent testing following learning and/or skill acquisition significantly enhances retention of knowledge and abilities.  In short, getting it wrong can help you get it right more often in the future.

So, despite the short term risk to my self-esteem, “error-centric learning” is an evidence-based practice that I’m taking to heart.  I’m not only applying the approach in the trainings I offer to mental health professionals–beginning all of my workshops with a fun, fact-filled quiz–but also in my attempts to master a completely new skill in my personal life: magic and mind reading.  And if the number of mistakes I routinely make in these pursuits is a reliable predictor of future success, well…I should be a master mind reading magician in little more than a few days.

Enough for now–back to practicing.  Tonight, in my hotel room in Buffalo, New York, I’m working on a couple of new card tricks.  Take a look at the videos of two new effects I recorded over the weekend.  Also, don’t miss the interview with Cindy Voelker and John Catalino on the implementation of Feedback-Informed Treatment (FIT) at Spectrum Human Services here in Buffalo.

Filed Under: deliberate practice, Feedback Informed Treatment - FIT, Practice Based Evidence Tagged With: Alliance, behavioral health, cdoi, holland, Norway, randomized clinical trial, scientific american

Eruptions in Europe and in Research

April 18, 2010 By scottdm 3 Comments

Dateline: 11:20 am, April 18th, 2010

Today I was supposed to fly from Stockholm, Sweden to the far northern town of Skelleftea–a flight that takes a little over an hour.  Instead, I’m sitting on a train headed for Sundsvall, the first leg of a 12 hour trip that will include a 6 hour bus ride and then a short stint in a taxi.

If you’ve been following the news coming out of Europe, you know that all flights into, out of, and around Europe have been stopped. Eyjafjallajokull–an Icelandic volcano–erupted the day after I landed in Goteborg, spewing an ash cloud that now covers most of Europe and has disrupted millions of travellers.  People are making do, sleeping on cots in airport, train, and bus terminals and using Facebook and Twitter to connect and arrange travel alternatives.

In the meantime, another eruption has taken place with the publication of the latest issue of the Journal of Consulting and Clinical Psychology that threatens to be equally disruptive to the field of psychotherapy–and to proponents of the narrow, specific-treatments-for-specific-disorders or “evidence-based treatments” movement.   Researchers Webb, DeRubeis, and Barber conducted a meta-analysis of studies examining the relationship between adherence to and competence in delivering a particular approach and outcome.  The authors report finding that, “neither adherence nor competence was…related to patient (sic) outcome and indeed that the aggregate estimates of their effects were very close to zero.”

Zero!  I’m not sure what zero means to everyone else, but where I come from it’s pretty close to nothing.  And yet, the romance with the EBT movement continues among politicians, policy makers, and proponents of specific treatment models.  Each year, millions and millions of dollars of scarce resources are poured into an approach to behavioral health that accounts for exactly 0% of the results.

Although it was not a planned part of their investigation, the must-read study by Webb, DeRubeis, and Barber also points to the “magma” at the heart of effective psychotherapy: the alliance, or quality of the relationship between consumer and provider.  The authors report, for example, finding “larger competence-outcome effect size estimates [in studies that]…did not control for the influence of the alliance.”

The alliance will take center stage at the upcoming, “Achieving Clinical Excellence” and “Training of Trainers” events.  Whatever you thought you knew about effective therapeutic relationships will be challenged by the latest research from our study of top performing clinicians worldwide.  I hope you’ll join our international group of trainers, researchers, and presenters by clicking on either of the links above.  And, if you’ve not already done so, be sure and visit the International Center for Clinical Excellence home page and request an invitation to join the community of practitioners and researchers who are learning and sharing their expertise.

Filed Under: Behavioral Health, Practice Based Evidence Tagged With: behavioral health, brief therapy, continuing education, icce, Journal of Consulting and Clinical Psychology, Outcome, public behavioral health

Improving Outcomes in the Treatment of Obesity via Practice-Based Evidence: Weight Loss, Nutrition, and Work Productivity

April 9, 2010 By scottdm 4 Comments

Obesity is a large and growing problem in the United States and elsewhere.  Data gathered by the National Center for Health Statistics indicate that 33% of Americans are obese.  When overweight people are added to the mix, the figure climbs to a staggering 66%!   The problem is not likely to go away soon or on its own, as the same figures apply to children.

Researchers estimate that weight problems are responsible for over 300,000 deaths annually and account for 12% of healthcare costs, or $100 billion–that’s right, $100,000,000,000–in the United States alone.   The overweight and obese have higher incidences of arthritis, breast cancer, heart disease, colorectal cancer, diabetes, endometrial cancer, gallbladder disease, hypertension, liver disease, back pain, sleeping problems, and stroke–not to mention the tremendous emotional, relational, and social costs.  The data are clear: the overweight are the target of discrimination in education, healthcare, and employment.  A study by Brownell and Puhl (2003), for example, found that: (1) a significant percentage of healthcare professionals admit to feeling “repulsed” by obese persons, even among those who specialize in bariatric treatment; (2) parents provide less college support to their overweight children than to their “thin” children; and (3) 87% of obese individuals reported that weight had prevented them from being hired for a job.

Sadly, available evidence indicates that while weight problems are “among the easiest conditions to recognize,” they remain one of the “most difficult to treat.”  Weight loss programs abound.  When was the last time you watched television and didn’t see an ad for a diet pill, program, or exercise machine?  Many work.  Few, however, lead to lasting change.

What might help?

More than a decade ago, I met Dr. Paul Faulkner, the founder and then Chief Executive Officer of Resources for Living (RFL), an innovative employee assistance program located in Austin, Texas.  I was teaching a week-long course on outcome-informed work at the Cape Cod Institute in Eastham, Massachusetts.  Paul had long searched for a way of improving outcomes and service delivery that could simultaneously be used to provide evidence of the value of treatment to purchasers–in the case of RFL, the large, multinational companies that were paying him to manage their employee assistance programs.  Thus began a long relationship between me and the management and clinical staff of RFL.  I was in Austin, Texas dozens of times providing training and consultation as well as setting up the original ORS/SRS feedback system known as ALERT, which is still in use at the organization today.  All of the original reliability, validity, norming, and response trajectories were done together with the crew at RFL.

Along the way, RFL expanded services to disease management, including depression, chronic obstructive pulmonary disease, diabetes, and obesity.  The “weight management” program delivered coaching and nutritional consultation via the telephone, informed by ongoing measurement of outcomes and the therapeutic alliance using the SRS and ORS.  The results are impressive.  The study by Ryan Sorrell, a clinician and researcher at RFL, found that the program and feedback led not only to weight loss, but also to significant improvements in distress, healthy eating behaviors (70%), exercise (65%), and presenteeism on the job (64%)–the latter being critical to the employers paying for the service.

Such research adds to the growing body of literature documenting the importance of “practice-based” evidence, making clear that finding the “right” or “evidence-based” approach for obesity (or any problem for that matter) is less important than finding out “what works” for each person in need of help.  With challenging, “life-style” problems, this means using ongoing feedback to inform whatever services may be deemed appropriate or necessary.  Doing so not only leads to better outcomes, but also provides real-time, real-world evidence of return on investment for those footing the bill.

Filed Under: Behavioral Health, Feedback Informed Treatment - FIT, Practice Based Evidence Tagged With: behavioral health, cdoi, cognitive-behavioral therapy, conferences, continuing education, diabetes, disease management, Dr. Paul Faulkner, evidence based medicine, evidence based practice, Hypertension, medicine, obesity, ors, outcome rating scale, practice-based evidence, public behavioral health, randomized clinical trial, session rating scale, srs, Training

Problems in Evidence-Based Land: Questioning the Wisdom of "Preferred Treatments"

March 29, 2010 By scottdm Leave a Comment

This last week, Jeremy Laurance, Health Editor for the U.K. Independent published an article entitled, “The big question: Does cognitive therapy work? And should the NHS (National Health Service) provide more of it?” Usually such questions are limited to professional journals and trade magazines. Instead, it ran in the “Life and Style” section of one of Britain’s largest daily newspapers. Why?

In 2007, the government earmarked £173,000,000 (approximately 260,000,000 U.S. dollars) to train up an army of new therapists. Briefly, the money was allocated following an earlier report by Professor Richard Layard of the London School of Economics which found that a staggering 38% of illness and disability claims were accounted for by “mental disorders.” The sticking point—and part of the reason for the article by Laurance—is that training was largely limited to a single treatment approach: cognitive-behavioral therapy (CBT).  And research released this week indicates that the efficacy of the method has been seriously overestimated due to “publication bias.”
Researchers Cuijpers, Smit, Bohlmeijer, Hollon, and Andersson (2010) examined the “effect sizes” of 117 trials and found that the tendency of journals to accept trials showing positive results and reject those with null or negative findings had inflated the apparent effectiveness of CBT; adjusting for this publication bias reduced the estimated effect by as much as 33 percent!
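The mechanism is easy to demonstrate with a toy simulation (the numbers below are assumed for illustration, not the Cuijpers et al. data): if journals disproportionately publish trials with clearly positive results, the average published effect overstates the true one.

```python
# Toy demonstration of publication bias (illustrative only -- not the
# Cuijpers et al. (2010) data). Simulate many small trials of a treatment
# with a modest true effect, "publish" only those whose observed effect
# is clearly positive, and compare the two averages.
import random

random.seed(42)
TRUE_EFFECT = 0.30          # true standardized effect (assumed)
N_TRIALS = 2000
SAMPLING_SD = 0.25          # noise in each small trial's estimate

observed = [random.gauss(TRUE_EFFECT, SAMPLING_SD) for _ in range(N_TRIALS)]
published = [d for d in observed if d > 0.2]    # journals favor positive results

mean_all = sum(observed) / len(observed)        # close to the true effect
mean_pub = sum(published) / len(published)      # inflated by the filter

print(f"mean of all trials:       {mean_all:.2f}")
print(f"mean of published trials: {mean_pub:.2f}")
```

A meta-analysis built only on the published trials would recover the inflated average; correcting for the missing null and negative trials pulls the estimate back down, which is precisely what Cuijpers and colleagues report.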
Combine such findings with evidence from multiple meta-analyses showing no difference in outcome between treatment approaches intended to be therapeutic, and one has to wonder why CBT continues to enjoy a privileged position among policy makers and regulatory bodies.  Despite the evidence, the governmental body in the UK responsible for reviewing research and making policy recommendations–the National Institute for Health and Clinical Excellence (NICE)–continues to advocate for CBT.  It’s not only unscientific, it’s bad policy. Alas, when it comes to treatment methods, CBT enjoys what British psychologist Richard Wiseman calls the “get out of a null effect free” card.
What would work? If the issue is truly guaranteeing effective treatment, the answer is measurement and feedback.  The single largest contributor to outcome is who provides the treatment, not what treatment approach is employed.  More than a dozen randomized clinical trials—the design of choice of NICE and SAMHSA—indicate that routine measurement and feedback improve outcomes and retention rates while decreasing costs—in many cases dramatically so.
I respectfully ask, “What is the hold up?”

Filed Under: Practice Based Evidence Tagged With: CBT, cdoi, cognitive-behavioral therapy, conferences, evidence based practice, icce, Jeremy Laurance, National Institute for Health and Clinical Excellence (NICE), randomized clinical trial, Richard Layard, Richard Wiseman

Neurobabble: Comments from Dr. Mark Hubble on the Latest Fad in the World of Therapy

March 24, 2010 By scottdm Leave a Comment


Rarely does a day go by without hearing about another “advance” in the neurobiology of human behavior.  Suddenly, it seems, the world of psychotherapy has discovered that people have brains!  And now, where the unconscious, childhood, emotions, behaviors, and cognitions once were…neurons, plasticity, and magnetic resonance imaging now are.  Alas, we are a field forever in search of legitimacy.  My long time colleague and friend, Mark Hubble, Ph.D., sent me the following review of recent developments.  I think you’ll enjoy it, along with a video by comedian John Cleese on the same subject.

Mark Hubble, Ph.D.

Today, while contemplating the numerous chemical imbalances that are unhinging the minds of Americans — notwithstanding the longstanding failure of the left brain to coach the right with reason, and the right to enlighten the left with intuition — I unleashed the hidden power of my higher cortical functioning to the more pressing question of how to increase the market share for practicing therapists. As research has dismantled once and for all the belief that specific treatments exist for specific disorders, the field is left, one might say, in an altered state of consciousness. If we cannot hawk empirically supported therapies or claim any specialization that makes any real difference in treatment outcome, we are truly in a pickle. All we have is ourselves, the relationships we can offer to our clients, and the quality of their participation to make it all work. This, of course, hardly represents a propitious proposition for a business already overrun with too many therapists, receiving too few dollars.

Fortunately, the more energetic and enterprising among us, undeterred by the demise of psychotherapy as we know it, are ushering in the age of neuro-mythology and the new language of neuro-babble.   Seemingly accepting wholesale the belief that the brain is the final frontier, some are determined to sell us the map thereto and make more than a buck while they are at it. Thus, we see terms such as “Somatic/Sensorimotor Psychotherapy,” “Interpersonal Neurobiology,” “Neurogenesis and Neuroplasticity,” “Unlocking the Emotional Brain,” “NeuroTherapy,” “Neuro Reorganization,” and so on.  A moment’s look into this burgeoning literature quickly reveals an inverse relationship between the number of scientific-sounding assertions and the actual studies proving the claims made. Naturally, this finding is beside the point, because the purpose is to offer the public sensitive, nuanced, brain-based solutions for timeless problems. Traditional theories and models are out; psychotherapies-informed-by-neuroscience, with their aura of greater credibility, are in.

Neurology and neuroscience are worthy pursuits. To suggest, however, that the data emerging from these disciplines have reached the stage of offering explanatory mechanisms for psychotherapy, including the introduction of “new” technical interventions, is beyond the pale. Metaphor and rhetoric, though persuasive, are not the same as evidence emerging from rigorous investigations establishing and validating cause and effect, independently verified, and subject to peer review.

Without resorting to obfuscation and pseudoscience, we already have a pretty good idea of how psychotherapy works and what can be done now to make it more effective for each and every client. From one brain to another: applying that knowledge would be a good case of using the old noggin.

Filed Under: Brain-based Research, Practice Based Evidence Tagged With: behavioral health, brief therapy, continuing education, mark hubble, meta-analysis, neuro-mythology, Norway, psychotherapy, public behavioral health

Practice-Based Evidence in Norway: An Interview with Psychologist Mikael Aagard

January 19, 2010 By scottdm Leave a Comment

For those of you following me on Facebook–and if you’re not, click here to start–you know that I was traveling above the arctic circle in Norway last week.  I always enjoy visiting the Scandinavian countries.  My grandparents immigrated from nearby Sweden.  I lived there myself for a number of years (and speak the language).  And I am married to a Norwegian!  So, I consider Scandinavia to be my second home.

In a prior post, I talked a bit about the group I worked with during my three day stay in Tromso.  Here, I briefly interview psychologist Mikael Aagard, the organizer of the conference.  Mikael works at KORUS Nord, an addiction technology transfer center, which sponsored the training.  His mission?  To help clinicians working in the trenches stay up-to-date with the research on “what works” in behavioral health.  Judging by the tremendous response–people came from all over the disparate regions of far northern Norway to attend the conference–he is succeeding.

Listen as he describes the challenges facing practitioners in Norway and the need to balance the “evidence-based practice” movement with “practice-based evidence.”  If you’d like any additional information regarding KORUS, feel free to connect with Mikael and his colleagues by visiting their website.  Information about the activities of the International Center for Clinical Excellence in Scandinavia can be found at: www.centerforclinicalexcellence.org.

Filed Under: Behavioral Health, Drug and Alcohol, evidence-based practice, Practice Based Evidence Tagged With: cdoi, evidence based practice, Hyperlipidemia, icce, meta-analysis, psychotherapy

Evidence-based practice or practice-based evidence? Article in the Los Angeles Times addresses the debate in behavioral health

January 18, 2010 By scottdm Leave a Comment


January 11th, 2010

“Debate over Cognitive & Traditional Mental Health Therapy” by Eric Jaffe

The debate between the different factions, interest groups, and scholars within the field of mental health hit the pages of the Los Angeles Times this last week. At issue?  Supposedly, whether the field will become “scientific” in practice or remain mired in the traditions of the past.  On the one side are the enthusiastic supporters of cognitive-behavioral therapy (CBT), who claim that existing research provides overwhelming support for the use of CBT in the treatment of specific mental disorders.  On the other side are traditional, humanistic, “feel-your-way-as-you-go” practitioners who emphasize quality over the quantitative.

My response?  Spuds or potatoes.  Said another way, I can’t see any difference between the two warring factions.  Yes, research indicates that CBT works.  That exact same body of literature shows overwhelmingly, however, that all approaches intended to be therapeutic are effective.  And yes, certainly, quality is important.  The question is, however, “what counts as quality?” and, more importantly, “who gets to decide?”

In the Los Angeles Times article, I offer a third way: what has loosely been termed “practice-based evidence.”  The bottom line?  Practitioners must seek and obtain valid, reliable, and ongoing feedback from consumers regarding the quality and effectiveness of the services they offer.  After all, what person, following unsuccessful treatment, would say, “well, at least I got CBT!” or “I’m sure glad I got the quality treatment”?

Filed Under: Behavioral Health, Dodo Verdict, Practice Based Evidence Tagged With: behavioral health, cognitive-behavioral therapy (CBT), evidence based practice, icce, Los Angeles Times, mental health, meta-analysis, public behavioral health

Are all treatments approaches equally effective?

January 9, 2010 By scottdm Leave a Comment

Bruce Wampold, Ph.D.

Late yesterday, I blogged about a soon-to-be published article in Clinical Psychology Review in which the authors argue that the finding by Benish, Imel, & Wampold (2008) of equivalence in outcomes among treatments for PTSD was due to “bias, over-generalization, lack of transparency, and poor judgement.”  Which interpretation of the evidence is correct?  Are there “specific approaches for specific disorders” that are demonstrably more effective than others?  Or does the available evidence show all approaches intended to be therapeutic to be equally effective?

History makes clear that science produces results in advance of understanding.  Until the response to Ehlers, Bisson, Clark, Creamer, Pilling, Richards, Schnurr, Turner, and Yule becomes available, I wanted to remind people of three prior blog posts that review the evidence regarding the differential efficacy of competing therapeutic approaches.  The first (and I think most illuminating)–“The Debate of the Century”–appeared back in August.  The post featured a link to a debate between Bruce Wampold and an enthusiastic proponent of “empirically supported treatments,” Steve Hollon.  Listen and then see if you agree with the large group of scientists and practitioners in attendance who thought–by a margin of 15:1–that Bruce carried the day.

The second post–Whoa Nellie!–commented on a $25 million research grant awarded by the US Department of Defense to study treatments for PTSD.  Why does this make me think of Deep Throat’s admonition to “follow the money”?  Here you can read the study that is causing the uproar within the “specific treatments for specific disorders” gang.

Third, and finally, if you haven’t already read the post “Common versus Specific Factors and the Future of Psychotherapy,” I believe you’ll find the thorough review of the research done in response to an article by Siev and Chambless critical of the “dodo verdict” helpful.

Filed Under: Behavioral Health, evidence-based practice, Practice Based Evidence, PTSD Tagged With: behavioral health, bruce wampold, Children, continuing education, icce, post traumatic stress, PTSD, public behavioral health
