Recent study documents that the effectiveness of psychotherapy has been overstated: An example of an RTFM and PEBKAC problem


Not being a computer nerd, I’d never come across these expressions.  My 14-year-old son was the first person I heard use the terms.  He was referring to a problem I was having with my desktop computer.  To be sure, I’m no Luddite.  Still, “computer” will always be a second language for me.

With a restart and a few clicks of the mouse, he resolved my issue.  When I asked him to explain what had caused the problem–hoping either to avoid or be able to resolve a similar occurrence in the future–he quipped, “Dad, it’s an RTFM problem, most likely in the PEBKAC.”

“RTFM problem?  In the PEBKAC?”

“Yeah,” he said with a laugh, then walked away.

Consulting Google, I quickly learned what the terms meant: Read the F%$&ing Manual, and Problem Exists Between Keyboard And Chair.  Swallowing my pride (and a fair bit of irritation), I had to admit my son was right.  I had not read the manual.  I didn’t want to read the manual.  I WANTED MY PROBLEM SOLVED!  As a result, I’d spent an increasingly frustrating hour, first tinkering, then on the phone with a less-than-helpful customer service representative.

So, what’s this got to do with psychotherapy?

Over the weekend, the Twittersphere lit up with posts about a story in the New York Times, “Effectiveness of Talk Therapy Is Overstated.”  The article reported on a new study which found that psychotherapy was “25% less effective…than previously thought.”

The response to the story was swift, for the most part questioning motives and methodology:

  • Who published this study and why?
  • What kind of therapy was studied?  
  • Why the emphasis on quantitative studies?
  • Why is the media always so negative about therapy?
  • Is this the whole picture?

The reaction is understandable.  The headline and story are enough to give any practicing therapist pause.  More so because, as I reviewed in my recent presentation at the Evolution of Psychotherapy conference, they are already working in a challenging practice environment.  Rules and regulation are on the increase.  Incomes are on the decline.  Therapists know the value of the work they do.  They can see it in the people they treat.  Yet instead of that value being recognized, the effectiveness of the field, its methods, and its practitioners is called into question.

The interaction with my son still fresh in my mind, I wondered, “Could this be an RTFM problem located in the PEBKAC?”

Said another way, “Had anyone actually read the study?!”  Despite assurances from the likes of Carl Rogers that “the facts are always friendly,” we know, for example, that therapists don’t read research.  How do we know?  RESEARCH!

In truth, the study is merely an empirical call for more openness and transparency in the publication of psychotherapy research.  Inflated estimates of effectiveness help no one.  Not practitioners.  Not clients.  Not the field.

The very same factors that lead the media to highlight the most attention-grabbing aspects of a news story influence what gets submitted, reviewed, and published in scholarly journals.  Sad, but true.  To get the full picture–to determine “what really works”–results from all research–whether published or not–must be tracked and reported.  Not surprisingly, when you get beyond the headlines, the story is almost always less dramatic and more nuanced.
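How big can the distortion get?  A toy calculation–the numbers are invented purely for illustration, not taken from the study–makes the logic plain.  Suppose ten published trials average an effect size of 0.80, while five unpublished trials average 0.20.  The full picture is the weighted mean:

\[
\bar{d}_{\text{all}} = \frac{10(0.80) + 5(0.20)}{15} = 0.60
\]

That is a 25% drop from the published figure of 0.80.  Leave the file drawer closed, and the field looks a quarter more effective than the complete evidence warrants.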

Additionally, the research article contains some real gems!  For example:

  • Psychotherapy was found clearly superior to a variety of placebo and no-treatment controls, including treatment-as-usual, pill-placebo, and non-specific control conditions.
  • Consistent with research reported on this blog, no differences in outcome were found between treatment approaches!
  • No differences in outcome were found between psychological treatments and anti-depressant medication.
  • Finally, the effect of psychotherapy plus medications was superior to anti-depressant medication alone.

What’s not to like?

And while we’re on the subject, this week another study was published.  I’m sure there will be no headline in the New York Times, despite the fact that it’s the largest psychotherapy outcome study in history!   What are the results?  RTFM!

Until next time,


Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

PS: Registration is open for the ICCE March Intensives:

Feedback Informed Treatment Advanced Intensive (March 17-19, 2016)
Feedback Informed Treatment Supervision Intensive (March 21-23, 2016)

The Verdict is “In”: Feedback is NOT enough to Improve Outcome

Nearly three years have passed since I blogged about claims being made about the impact of routine outcome monitoring (ROM) on the quality and outcome of mental health services.  While a small number of studies showed promise, the results of others indicated that therapists neither learned from nor became more effective over time as a result of being exposed to ongoing feedback.  Such findings suggested that the focus on measures and monitoring might be misguided–or at least a “dead end.”

Well, the verdict is in: feedback is not enough to improve outcomes.  Indeed, researchers are finding it hard to replicate the medium-to-large effect sizes enthusiastically reported in early studies–a well-known phenomenon called the “decline effect,” observed across a wide range of scientific disciplines.

In a naturalistic multisite randomized clinical trial (RCT) in Norway, for example, Amble, Gude, Stubdal, Andersen, and Wampold (2014) found the main effect of feedback to be much smaller (d = 0.32) than the meta-analytic estimate reported by Lambert and Shimokawa (2011; d = 0.69).  A more recent study (Rise, Eriksen, Grimstad, and Steinsbekk, 2015) found that routine use of the ORS and SRS had no impact on either patient activation or mental health symptoms among people treated in an outpatient setting.  Importantly, the clinicians in the study were trained by someone with an allegiance to the use of the scales as routine outcome measures.
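For readers who don’t speak “effect size”: Cohen’s d expresses the difference between two group means in standard-deviation units.  The standard formula (nothing here is specific to the trials above) is

\[
d = \frac{\bar{x}_{1} - \bar{x}_{2}}{s_{\text{pooled}}}, \qquad s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
\]

By that yardstick, the Norwegian estimate (0.32) is less than half the earlier meta-analytic figure (0.69)–the decline effect in action.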

Fortunately, a large and growing body of literature points in a more productive direction.  Consider the recent study by De Jong, van Sluis, Nugter, Heiser, and Spinhoven (2012), which found that a variety of therapist factors moderated the effect ROM had on outcome. Said another way, in order to realize the potential of feedback for improving the quality and outcome of psychotherapy, emphasis must shift away from measurement and monitoring and toward the development of more effective therapists.

What’s the best way to enhance the effectiveness of therapists?  Studies on expertise and expert performance document a single, underlying trait shared by top performers across a variety of endeavors: deep domain-specific knowledge.  In short, the best know more, see more and, accordingly, are able to do more.  The same research identifies a universal set of processes that both account for how domain-specific knowledge is acquired and furnish step-by-step directions anyone can follow to improve their performance within a particular discipline.  Miller, Hubble, Chow, & Seidel (2013) identified and provided detailed descriptions of three essential activities giving rise to superior performance.  These include: (1) determining a baseline level of effectiveness; (2) obtaining systematic, ongoing feedback; and (3) engaging in deliberate practice.
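To make the first of those steps concrete: one common way to establish a baseline is to express clients’ average intake-to-final change in standard-deviation units.  Below is a minimal sketch in Python.  The scores are invented, and the choice of a simple pre-post effect size is my illustrative assumption–not a procedure prescribed by Miller, Hubble, Chow, & Seidel (2013):

```python
from statistics import mean, stdev

# Hypothetical intake (pre) and final (post) outcome scores for a
# small caseload; higher = better functioning.  A credible baseline
# requires far more cases than this.
pre = [14, 18, 22, 11, 25, 16, 19, 20]
post = [24, 21, 30, 19, 33, 22, 27, 31]

# Simple pre-post effect size: mean change divided by the
# standard deviation of scores at intake.
change = [b - a for a, b in zip(pre, post)]
baseline_d = mean(change) / stdev(pre)

print(f"Baseline effect size (pre-post d): {baseline_d:.2f}")
```

Knowing where you start is what makes the feedback and deliberate practice that follow measurable rather than aspirational.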

I discussed these three steps, and more, in a recent interview for the IMAGO Relationships Think Tank.  Although the interview was intended for their members, the organizers graciously agreed to let me make it available here on my blog.  Be sure and leave a comment after you’ve had a chance to listen!

Until next time,


Scott D. Miller, Ph.D.


Intake: A Mistake

Available evidence leaves little doubt.  As I’ve blogged about previously, separating intake from treatment results in:

  • Higher dropout rates;
  • Poorer outcomes;
  • Longer treatment duration; and
  • Higher costs.

And yet, in many public behavioral health agencies, the practice is commonplace. What else can we expect?

Chronically underfunded, and perpetually overwhelmed by mindless paperwork and regulation, agencies and practitioners are left with few options for meeting the ever-rising number of people in need of help. Between 2009 and 2012, for example, the number of people receiving mental health services increased by 10%. During the same period, funding to state agencies decreased by $4.35 billion. Not long ago, in my own home town of Chicago, the city shuttered half of its mental health clinics, forcing the remaining, already burdened, agencies to absorb an additional 5,000 people in need of care.

Simply put, the practice of separating intake from treatment is little more than a form of “crowd management”–and an ineffective one at that.

Adding to the growing body of evidence is a new study investigating the impact of computerized intake on the consumer’s experience of the therapeutic relationship and continuation in care. Not only did researchers find that therapist use of a computer had a negative impact on the quality of the working relationship–one of the best predictors of outcome–but clients were between 62 and 97% less likely to continue in care!

It’s not hard to see how these well-intentioned–some would argue, absolutely necessary–solutions actually end up exacerbating the problem. Money is wasted when the paperwork is completed but people don’t come back–money that would be better spent providing treatment. Those who do not return don’t disappear; they simply access services in other ways (e.g., the E.R., police, social services)–after all, they need help! The ones who do continue after intake experience poorer outcomes and stay longer in care, a cost to both the consumer and the system.

What to do?

In addition to pushing back against the mindless regulation and paperwork, there are several steps practitioners and agency managers can take:

  • Stop separating intake from treatment

The practice does not save time and actually increases costs. Consider having consumers complete as much of the paperwork as possible before the session begins. The first visit is critical. It determines whether people continue or drop out. Listen first. At the end of the visit, review the paperwork, filling in missing data and completing any remaining forms.

  • Begin monitoring outcome

Research to date shows that routinely monitoring progress reduces dropout rates and the length of time spent in treatment while simultaneously improving outcome. Combined, such results work to alleviate the bottleneck at the entry point of services. (A bare-bones sketch of what such monitoring can look like follows this list.)

  • Begin monitoring the quality of the therapeutic relationship

Engagement and outcomes are improved when problems in the relationship are identified and openly discussed. Even when intake is separated from treatment, feedback should be sought. Data to date indicate that the most effective clinicians seek and more often receive negative feedback, a skill that enables them to better meet the needs of those they serve.
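For practitioners wondering what the mechanics might look like, here is a minimal sketch in Python. The decision rules and the function name are illustrative assumptions of mine, not the algorithms of any formal feedback system; the cutoffs shown are the commonly cited values for the adult ORS and SRS, but verify them against the current scoring manuals before relying on them:

```python
# Bare-bones session-by-session monitoring with the ORS and SRS.
# Cutoffs are the commonly cited values for the adult scales; check
# the current scoring manuals before relying on them.
ORS_CUTOFF = 25   # clinical cutoff, adult ORS (total score 0-40)
SRS_CUTOFF = 36   # alliance cutoff, adult SRS (total score 0-40)

def review_session(ors_scores, srs_score):
    """Return simple flags for one client's progress and alliance.

    ors_scores: ORS totals to date (first entry = intake score).
    srs_score:  the most recent SRS total.
    """
    flags = []
    if len(ors_scores) >= 3 and ors_scores[-1] <= ors_scores[0]:
        flags.append("no measured change since intake -- discuss the plan")
    if ors_scores[-1] < ors_scores[0] - 5:
        flags.append("scores worsening -- consider alternatives")
    if srs_score < SRS_CUTOFF:
        flags.append("possible alliance problem -- invite negative feedback")
    if not flags and ors_scores[-1] >= ORS_CUTOFF:
        flags.append("on track and above the clinical cutoff")
    return flags or ["on track"]

# Example: flat ORS scores and a low alliance rating.
print(review_session([19, 18, 17], srs_score=33))
```

The point is not the code, of course, but the discipline: scores are reviewed every visit, and problems are raised out loud rather than left to intuition.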

Getting started is not difficult. Indeed, there’s an entire community of professionals just a click away who are working with and learning from one another. The International Center for Clinical Excellence is the largest web-based community of mental health professionals in the world. It’s ad-free and costs nothing to join.

Sign up for the ICCE Fall Webinar. You will learn:

  • The Empirical Basis for Feedback Informed Treatment
  • Basics of Outcome and Alliance Measurement
  • Integrating Feedback into Practice & Creating a Culture of Feedback
  • Understanding Outcome and Alliance Data

Register online. CEs are available.

Finally, join colleagues and friends from around the world for the Advanced and FIT Supervision courses, held in March in Chicago. We work and play hard. You will leave with a thorough grounding in feedback-informed principles and practice. Registration is limited, and the courses tend to sell out several months in advance.

Until then,


Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence



What’s happening to CBT? And why all the hoopla misses the point


In May 2012, I blogged about results from a Swedish study examining the impact of psychotherapy’s “favorite son”–cognitive behavioral therapy–on the outcome of people disabled by depression and anxiety.  As in many other Western countries, the percentage of people in Sweden disabled by mental health problems was growing dramatically.  Costs were skyrocketing.  Even with treatment, far too many left the workforce permanently.

Sweden embraced “evidence-based practice”–most popularly construed as the application of specific treatments to specific disorders–as a potential solution.  Socialstyrelsen, the country’s National Board of Health and Welfare, developed and disseminated a set of guidelines (“riktlinjer”) specific to mental health practice.  Topping the list?  CBT.

A billion crowns were spent training clinicians in the method; another billion, using it to treat people with diagnoses of depression and anxiety.  As I reported at the time, the State’s “return on investment” was zilch.  Said another way, the widespread adoption of the method had no effect whatsoever on outcome (see Socionomen, Holmqvist interview).  Not only that, but many who were not disabled at the time they were treated with CBT became disabled along the way, bringing the total price tag, when combined with the 25% who dropped out of treatment, to a staggering 3.5 billion!

And now, a new study–this time from Norway, Sweden’s neighbor to the west.

Norwegian researchers looked at how the effectiveness of CBT has fared over time.  Examining data from 70 randomized clinical trials, study authors Johnsen and Friborg found the approach to be roughly half as effective as it was four decades ago.  Mind you, not 10 or 20 percent.  Not 30 or 40.  Fifty percent less effective!  Cause for concern, to be sure.

So, what’s happening to CBT?  Is the “favorite son” losing its effectiveness?

Naturally, the results published by the Norwegian researchers generated a great deal of activity in social media.  Critics were gleeful (see the comments at the end of the article).  Proponents, of course, questioned the results.

If the findings are confirmed in subsequent studies, CBT will be in remarkably good company.  Across a variety of disciplines–pharmacology, medicine, zoology, ecology, physics–promising findings often “lose their luster,” with many fading away completely over time (Lehrer, 2010; Yong, 2012).  Alas, even in science, the truth occasionally wears off.  In psychiatry and psychology, this phenomenon, known as the “decline effect,” is particularly vexing.

That said, while the study and commentary have managed to generate a modest amount of heat, they’ve shed precious little light on the question of how to improve the outcome of psychotherapy.  After all, that’s what led Sweden to invest so heavily in CBT in the first place–doing so, it was believed, would improve the effectiveness of care.  So today, I called Rolf Holmqvist.

Rolf is a professor in the Department of Behavioral Science and Learning at Linköping University.  He’s also the author of the Swedish study I blogged about over three years ago.  I wanted to catch up, find out what, if anything, had happened since he published his results.

“Some changes were made in the guidelines some time ago.  In the case of depression, for example, the guidelines have become a little more open, a little broader.  CBT is always on top, along with IPT, but psychodynamic therapy is now included…although it’s further down on the list.”

Sounded like progress, until Rolf continued, “They are broadening a bit.  Still the fact is that if you look at the research, for example, with mild and moderate depression, almost any method works if it’s done systematically.”

Said another way, despite the lack of evidence for the differential effectiveness of psychotherapeutic approaches–in this case, CBT for depression–the mindset guiding the creation of lists of “specific treatments for specific disorders” remains.

Rolf’s sentiments are echoed by uber-researchers Wampold and Imel (2015), who very recently pointed out, “Given the evidence that treatments are about equally effective, that treatments delivered in clinical settings are effective (and as effective as that provided in clinical trials), that the manner in which treatments are provided [is] much more important than which treatment is provided, mandating particular treatments seems illogical. In addition, given the expense involved in ‘rolling out’ evidence-based treatments in private practices, agencies, and in systems of care, it seems unwise to mandate any particular treatment.”

Right now, in Sweden, an authority within the federal government (Riksrevisorn) is conducting an investigation evaluating the appropriateness of funds spent on the training and delivery of CBT.  In an article published yesterday in one of the country’s largest newspapers, Rolf Holmqvist argues, “Billions spent–without any proven results.”

Returning to the original question: what can be done to improve the outcome of psychotherapy?

“We need transparent evaluation systems,” Rolf quickly answered, “that provide feedback at each session about the progress of treatment.  This way, therapists can begin to look at individual treatment episodes, and be able to see when, where, and with whom they are and are not successful.”

“Is that on the agenda?” I asked, hopefully.

“Well,” he laughed, “here, we need to have realistic expectations.  The idea of recommending that you should employ a clinician because they are effective and a good person, rather than because they can do a certain method, is hard for regulatory agencies like Socialstyrelsen.  They think of clinicians as learning a method, and then applying that method, and that it’s the method that makes the process work…”

“Right,” I thought, “mindset.”

“…and that will take time,” Rolf said, “but I am hopeful.”

But you don’t have to wait.  You can begin tracking the quality and outcome of your work right now.  It’s easy and free.  Click here to access two simple scales–the ORS and SRS.  The first measures progress; the second, the quality of the working relationship.

Next, read our latest article on how the field’s most effective practitioners use the measures to, as Rolf advised, “identify when, where, and with whom” they are and are not successful, and what steps they take to improve their effectiveness.

Finally, join colleagues from around the world for our Fall Webinar on “Feedback-Informed Treatment.”

We’ll be covering everything you need to know to integrate feedback into your clinical practice.

Until next time,


Scott D. Miller, Ph.D.
International Center for Clinical Excellence

Love, Mercy, & Adverse Events in Psychotherapy

Just over a year ago, I blogged about an article that appeared in one of the U.K.’s largest daily newspapers, The Guardian.  Below a picture of an attractive yet dejected-looking woman (reclined on a couch), the caption read, “Major new study reveals incorrect…care can do more harm than good.”

I was interested.

As I often do in such cases, I wrote directly to the researcher cited in the article asking for a reprint or pre-publication copy of the study.  No reply.  One month later, I wrote again.  Still, no reply. Two months after my original email, I received a brief note thanking me for my interest in the study and offering to share any results once they became available.

“Wait a minute,” I immediately thought, “The results of this ‘major new study’ about the harmful effects of psychotherapy had already been announced in a leading newspaper.  How could they not be available?”  Then I wondered, “If there are no actual results to share, what exactly was the article in The Guardian based on?”

So-called “adverse events” are a hot topic at the moment.  That some people deteriorate while in care is not in question.  Research dating back several decades puts the figure at about 10%, on average (Lambert, 2010). When those being treated are adolescents or children, the rates are twice as high (Warren et al., 2009).

Putting this in context, compared to medical procedures with effect sizes similar to psychotherapy (e.g., coronary artery bypass surgery, treatment of stage II and III breast cancer, stroke), the rate is remarkably low.  Nonetheless, it is a matter of concern–especially given research showing that therapists are not particularly adept at recognizing when those they serve deteriorate in their care (Hannan et al., 2005).

The question, of course, is: what’s the cause?

To date, whenever the question of adverse events is raised, two “usual suspects” are trotted out: (1) the method of treatment used; and (2) the therapist.  Let’s take a closer look at each.

In an October 2014 article published in World Psychiatry, Linden and Schermuly-Haupt wrote about estimates of side effects associated with specific methods of treatment that had been reported in an earlier study by Swiss researchers.  The numbers were shocking.  Patient-reported “burdens caused by therapy” were 19.7% with CBT, 20.4% for systemically oriented treatments, 64.8% with humanistic approaches, and a staggering 94.1% with psychodynamic psychotherapy.

Based on such results, one could only conclude that anyone seeking anything other than CBT should have their head examined.

There is only one problem.  The figures reported were wrong.  Completely and utterly wrong.  Linden and Schermuly-Haupt made an arithmetic error and, as a result, totally misinterpreted the Swiss findings.  Read the study for yourself.  When it comes to adverse events in psychotherapy, CBT–the fair-haired child of the evidence-based practice movement–is no better.  Indeed, as the study clearly shows, people treated with humanistic and systemic approaches suffered fewer “burdens” than expected, while those in CBT had a slightly higher, although not statistically significant, level.  What’s more, the observed percentage of people in care who perceived the quality of the therapeutic relationship–the single most potent predictor of engagement and outcome–as poor was significantly higher than expected in CBT and lower for both humanistic and systemic approaches.

How could the researchers have gotten it so wrong?

As I pointed out in my blog over a year ago, despite claims to the contrary (e.g., Lilienfeld, 2007), no psychotherapy approach tested in a clinical trial has ever been shown to reliably lead to or increase the chances of deterioration.  NONE.  Scary stories about dangerous psychological treatments are limited to a handful of fringe therapies–approaches that have never been vetted scientifically and which all but a few practitioners avoid.  In short, it’s not about the method.

(By the way, over a month ago, I wrote to the lead author of the paper that appeared in World Psychiatry via the ResearchGate portal–a site where scholars meet and share their publications–providing a detailed breakdown of the statistical errors in the publication.  No response thus far.)

With only one suspect left, attention naturally turns to the therapist–you know, the “bad apple” in the bunch.  Here’s what we know.  That some practitioners do more harm than others is not exactly news.  Have you seen the new biopic Love & Mercy, about the life of Beach Boy Brian Wilson?  You should.  The acting is superb.

Wilson’s therapist, psychologist Eugene Landy (chillingly recreated by actor Paul Giamatti), is a prime example of an adverse event.  See the film and you’ll most certainly wonder how the guy kept his license to practice so long.  And yet, as I also pointed out in my blog last year, there are too few such practitioners to account for the total number of clients who worsen.  Consider this unsettling fact: beyond the 10% of those who deteriorate in psychotherapy, an additional 30 to 50% experience no benefit whatsoever!

Where does this leave us when it comes to adverse events in psychotherapy?

Whatever the cause, lack of progress and risk of deterioration are issues for all clinicians and clients.   The key to addressing these problems is tracking progress from visit to visit so that those not improving, or getting worse, can be identified and offered alternatives.  It’s that simple.

Right now, practitioners can access two simple, easy-to-use scales for free.  Both have been tested in multiple randomized clinical trials and deemed evidence-based by the federal Substance Abuse and Mental Health Services Administration (SAMHSA).

Learning to use the tools isn’t difficult.  It costs nothing to join the International Center for Clinical Excellence and begin interacting with professionals around the world who are using the measures to improve the quality and outcome of behavioral health services.  More detailed instruction is available at the upcoming webinar:

Join us in tackling the issue of adverse events in psychotherapy.  In the meantime, be sure and leave a comment below.

Best wishes for the summer,


Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence

P.S.: On the one-year anniversary of my original email to the researcher cited in The Guardian, I sent another.  That was over a month ago.  So far, no reply.  By contrast, the reporter who broke the story, Sarah Boseley, wrote back within a half hour!  She’s following up with her sources.  I’ll let you know if she gets a response.