Scott D. Miller - For the latest and greatest information on Feedback Informed Treatment


Seeing What Others Miss

March 13, 2022 By scottdm Leave a Comment

It’s one of my favorite lines from one of my all-time favorite films.  Civilian Ellen Ripley (Sigourney Weaver) accompanies a troop of “colonial marines” to LV-426.  Contact with the people living and working on the distant exomoon has been lost.  A formidable life form is suspected.  The Alien.  Ripley is on board as an advisor.  The only person who’s ever met the creature.  The lone survivor of a ship whose crew was decimated hours after first contact.

On arrival, Ripley briefs the team.  Her description and warnings are met with a mixture of determination and derision by the tough, experienced, highly-trained, and well-equipped soldiers.  On touchdown, the group immediately jumps into action.  First contact does not go well.  Confidence quickly gives way to chaos and confusion.  Not only do many die, but the actions they take to defend themselves inadvertently damage a nuclear reactor.

If Ripley and the small group that remains hope to survive, they must get off the planet as soon as possible.  With senior leaders out of commission, command decisions fall to a lowly corporal named Dwayne Hicks.  His team is tired and facing overwhelming odds.  It’s then he utters the line.  “Hey, listen,” he says, “We’re all in strung out shape, but stay frosty, and alert …”.

Stay frosty and alert.

Sage counsel — advice which, had it been heeded from the very outset of the journey, would likely have changed the course of events — but also exceedingly difficult to follow.  Sounds.  Smells.  Flavors.  Touch.  Motion.  Attention.  Most behaviors.  Once we become accustomed to them, they disappear from consciousness.

Said another way, experience dulls the senses.  Except when it doesn’t.   Turns out, some are less prone to habituation.

In his study of highly effective psychotherapists, for example, my colleague Dr. Daryl Chow (2014) found, “the number of times therapists were surprised by clients’ feedback … was … a significant predictor of client outcome” (p. xiii).  Turns out, highly effective therapists frequently see something important in what average practitioners conclude is simply “more of the same.”  It should come as no surprise, then, that a large body of evidence finds no correlation between therapist effectiveness and age, training, professional degree or certification, caseload, or amount of clinical experience (1, 2).

Staying “frosty and alert” is the subject of Episode 5 of The Book Case Podcast.  My colleague Dr. Dan Lewis and I review three new books, each organized around overcoming the natural human tendency to develop attentional biases and blind spots.  Be sure to leave a comment after listening.

Filed Under: deliberate practice, Feedback, Feedback Informed Treatment - FIT, FIT

Session Frequency and Outcome: What is the “Right Dose” for Effective Psychotherapy?

December 16, 2021 By scottdm Leave a Comment

The last two years have been difficult.  Whether through illness, death of loved ones, job loss, economic insecurity, or social isolation, few have escaped the consequences of the worldwide pandemic.

While government and media attention has been focused on physical health, rates of anxiety and depression have soared (1).  Younger people have been particularly impacted.  One recent study found half of those surveyed were at risk for major depression (2).  Such findings have been confirmed in a soon-to-be-published study of data generated by practitioners using the ORS and SRS to monitor the care being provided to thousands of adults, adolescents, and children around the world.  Briefly, an international group of researchers and scholars found a trend among adolescents and younger adults toward more, longer, and overall less effective episodes of care.

In the US, regardless of client age, therapists are struggling to keep up with demand for mental health services.  It’s not a new problem.  It’s one that has worsened significantly since the outbreak of COVID-19 (3, 4).

Before and after COVID

In this environment of demand exceeding availability, it has become increasingly common for clinicians to (1) carry higher caseloads and (2) deliver services to returning clients on an attenuated schedule.  Said another way, clinicians are seeing more people, but not the same ones from week to week.  Instead, the sessions of those already in care are being spaced further apart so that new intakes can be accommodated (5).

While such developments are entirely understandable in the present environment, the question is whether they constitute sound clinical practice.  Where therapeutic effectiveness is concerned, the answer is an unequivocal “no.”  Although the quantity of sessions (i.e., the total number) is not correlated with outcome, studies do show a clear relationship between session frequency and effectiveness — specifically, faster rates of change, improved chances of recovering sooner, and lower rates of attrition (6, 7, 8, 9).

Mindful of such findings, is there anything practitioners and agencies can do to more effectively balance demand with availability?  Here, research indicates the answer is, “yes.”

Over the last decade, data have been gathered about the progress of millions of clients via the routine administration of standardized outcome measures.  As readers of this blog already know, studies show using the resulting information to adjust services to better fit the individual (aka Feedback-Informed Treatment) improves retention and outcome while also reducing costs (click here to access a spreadsheet containing a current list of studies).

Importantly, the very same data are providing therapists and clients with detailed, evidence-based guidance for optimizing the frequency, dose, and intensity of services.  As all clinicians know, some clients need and benefit from more, others less.  With demand at historically high rates, being able to determine which is which enables practitioners to utilize their limited availability wisely, ensuring maximal improvement for each individual client (10).

Filed Under: Feedback Informed Treatment - FIT, FIT

Getting in the Deliberate Practice HABIT

July 22, 2021 By scottdm Leave a Comment

Type the words, “Old habits …” into Google, and the search engine quickly adds, “die hard” and “are hard to break.”  When I did it just now, these were followed by two song titles — one by Hank Williams Jr., the other by Mick Jagger — both dealing with letting go of past relationships.  Alas, in love and life, breaking up is hard to do.

Like it or not, and despite our best intentions, we often end up returning to what we know.  What are generally referred to as “habits,” researchers in the field of expert performance label “automaticity” — literally, thoughts and behaviors engaged in reflexively, involuntarily, or unconsciously.

The evidence shows more than 40% of what we do on a daily basis is habitual in nature; that is, carried out while we’re thinking about something else (1).  While such data might generate concern for most — “that’s a lot of acting without thinking” — the expertise literature indicates it’s absolutely essential to improving performance.  Simply put, automaticity frees up our limited cognitive resources to focus on achieving performance objectives just beyond our current abilities — a process known as deliberate practice.

So, what’s your sense?  How long does it typically take for new behaviors to be executed without a high degree of cognitive effort?

A. 14 days
B. 21 days
C. 36 days
D. 56 days
E. 66 days

Please jot down your answer before reading further.

Did you do it?

Now, before I provide a research-based answer, would you watch the video below?  (It’s fun, I promise)

Having watched the video, would you care to change your answer?  With a self-reported 5 minutes of practice per day, it took Destin 8 months to achieve automaticity on his “backwards bicycle.”  His experience is far from unique.  Turns out, most of us — like many of those in the video who confidently seated themselves on the bike, then failed — seriously underestimate the amount of time and effort required to establish new, more effective habits.

Somehow, somewhere, sometime, someone asserted the road to automaticity was about 21 days (3).  Research actually shows it takes, on average, three times as long and, in many instances, up to 8 months (2)!  Does that latter figure sound familiar?  Complicating such findings is the fact that many of the “habits” studied by researchers are relatively simple in nature (e.g., drinking a bottle of water with lunch, running 15 minutes after dinner).  Imagine a more complex behavior, such as learning to respond empathically to the diverse clients presenting for psychotherapy — and, just so you know, soon-to-be-published research shows such abilities neither improve with experience nor correlate with clinicians’ estimates of their ability — and the challenge involved in improving clinical performance becomes even more apparent.

And did I mention the sense of failure, even incompetence, that frequently accompanies attempts to establish new habits?  It’s understandable why so many of our efforts to improve are short lived.  Frankly, it’s far easier to see oneself as getting better than to actually do what’s necessary long enough to improve.  The evidence, reviewed previously on this blog, documents as much (4).

In our latest book, Better Results (APA, 2020), we identify a series of evidence-based steps for helping therapists develop a sustainable deliberate practice plan.  Known by the acronym A.R.P.S. (5), it includes:

  • Automated: If you are asking yourself when, you likely never will.
  • Reference point:  Count your steps, not your achievements.
  • Playful: Give in, let go, have fun.
  • Support: Go alone and you won’t go far.

Following these steps, we’ve found, helps clinicians maintain their momentum as they apply deliberate practice in their professional development efforts.  To these, I add a precursor: Change your mindset.  Yeah, I know, that results in C.A.R.P.S., meaning “to find fault or complain querulously or unreasonably; be niggling in criticizing minor errors,” but that’s precisely the point.  Recalling that deliberate practice is about reaching for performance objectives just beyond our current abilities, think “small and continuous improvement” rather than “achieving proficiency and mastery.”

Filed Under: deliberate practice, excellence, Feedback Informed Treatment - FIT, FIT, Top Performance

Do We Learn from Our Clients? Yes, No, Maybe So …

March 2, 2021 By scottdm Leave a Comment

When it comes to professional development, we therapists are remarkably consistent in opinion about what matters.  Regardless of experience level, theoretical preference, professional discipline, or gender identity, large, longitudinal studies show “learning from clients” is considered the most important and influential contributor (1, 2).  Said another way, we believe clinical experience leads to better, increasingly effective performance in the consulting room.

As difficult as it may be to accept, the evidence shows we are wrong.  Confidence, proficiency, even knowledge about clinical practice may improve with time and experience, but not our outcomes.  Indeed, the largest study ever published on the topic — 6,500 clients treated by 170 practitioners whose results were tracked for up to 17 years — found the longer therapists were “in practice,” the less effective they became (3)!  Importantly, this result remained unchanged even after researchers controlled for several patient, caseload, and therapist-level characteristics known to have an impact on effectiveness.

Only two interpretations are possible, neither of them particularly reassuring.  Either we are not learning from our clients, or what we claim to be learning doesn’t improve our ability to help them.  Just to be clear, the problem is not a lack of will.   Therapists, research shows, devote considerable time, effort, and resources to professional development efforts (4).  Rather, it appears the way we’ve approached the subject is suspect.

Consider the following provocative, but evidence-based, idea.  Most of the time, there simply is nothing to learn from a particular client about how to improve our craft.  Why?  Because so much of what affects the outcome of individual clients at any given moment in care is random — that is, either outside of our direct control or not part of a recurring pattern of therapist errors.  Extratherapeutic factors, as these influences are termed, contribute a whopping 87% to the outcome of treatment (5, 6).  Let that sink in.

The temptation to draw connections between our actions and particular therapeutic results is both strong and understandable.  We want to improve.  To that end, the first step we take — just as we counsel clients — is to examine our own thoughts and actions in an attempt to extract lessons for the future.  That’s fine, unless no causal connection exists between what we think and do, and the outcomes that follow … then, we might as well add “rubbing a rabbit’s foot” to our professional development plans.

So, what can we do?  Once more, the answer is as provocative as it is evidence-based.  Recognizing the large role randomness plays in the outcome of clinical work, therapists can achieve better results by improving their ability to respond in the moment to the individual and their unique and unpredictable set of circumstances.  Indeed, uber-researchers Stiles and Horvath note the research indicates, “Certain therapists are more effective than others … because [they are] appropriately responsive … providing each client with a different, individually tailored treatment” (7, p. 71).

What does improving responsiveness look like in real-world clinical practice?  In a word, “feedback.”  A clever study by Jeb Brown and Chris Cazauvielh found, for example, average therapists who were more engaged with the feedback their clients provided — as measured by the number of times they logged into a computerized data-gathering program to view their results — in time became more effective than their less engaged peers (8).  How much more effective, you ask?  Close to 30% — not a bad “return on investment” for asking clients to answer a handful of simple questions and then responding to the information they provide!

If you haven’t already done so, click here to access and begin using two free, standardized tools for gathering feedback from clients.  Next, join our free, online community to get the support and inspiration you need to act effectively and creatively on the feedback your clients provide — hundreds and hundreds of dedicated therapists working in diverse settings around the world support each other daily on the forum and are available regardless of time zone.

And here’s a bonus.  Collecting feedback, in time, provides the very data therapists need to be able to sort random from non-random in their clinical work, to reliably identify when they need to respond and when a true opportunity for learning exists.  Have you heard or read anything about “deliberate practice?”  Since first introducing the term to the field in our 2007 article, Supershrinks, it’s become a hot topic among researchers and trainers.  If you haven’t yet, chances are you will soon be seeing books and videos offering to teach how to use deliberate practice for mastering any number of treatment methods.  The promise, of course, is better outcomes.  Critically, however, if training is not targeted directly to patterns of action or inaction that reliably impact the effectiveness of your individual clinical performance in negative ways, such efforts will, like clinical experience in general, make little difference.

If you are already using standardized tools to gather feedback from clients, you might be interested in joining me and my colleague Dr. Daryl Chow for an upcoming, web-based workshop.  Delivered weekly in bite-sized bits, we’ll not only help you use your data to identify your specific learning edge, but also work with you to develop an individualized deliberate practice plan.  You can go at your own pace, as access to the course and all training materials is available to you forever.  Interested?  Click here to read more or sign up.

Filed Under: Behavioral Health, deliberate practice, evidence-based practice, Feedback Informed Treatment - FIT, FIT

Making Sense of Client Feedback

January 4, 2021 By scottdm Leave a Comment

I have a guilty confession to make.  I really like Kitchen Nightmares.  Even though the show finished its run six L O N G years ago, I still watch it in re-runs.  The concept was simple.  Send one of the world’s best known chefs to save a failing restaurant.

Each week a new disaster establishment was featured.  A fair number were dives — dirty, disorganized messes with all the charm and quality of a gas station lavatory.  It wasn’t hard to figure out why these spots were in trouble.  Others, by contrast, were beautiful, high-end eateries whose difficulties were not immediately obvious.

Of course, I have no idea how much of what we viewers saw was real versus contrived.  Regardless, the answers owners gave whenever Ramsay asked for their assessment of the restaurant never failed to surprise and amuse.  I don’t recall a single episode in which the owners readily acknowledged having any problems, other than the lack of customers!  In fact, most often they defended themselves, typically rating their fare “above average” — a 7 or higher on a scale from 1 to 10.

Contrast the attitude of these restaurateurs with pop music icon Billy Joel.  When journalist Steve Kroft asked him why he thought he’d been so successful, Joel at first balked, eventually answering, “Well, I have a theory, and it may sound a little like false humility, but … I actually just feel that I’m competent.”  Whether or not you are a fan of Joel’s sound, you have to admit the statement is remarkable.  He is one of the most successful music artists in modern history: inducted into the Rock and Roll Hall of Fame, winner of a Grammy Legend Award, with four number one albums on the Billboard 200, and consistently filling stadiums of adoring fans despite not having released a new album since 1993!  And yet, unlike those featured on Kitchen Nightmares, he sees himself as merely competent, adding, “when … you live in an age where there’s a lot of incompetence, it makes you appear extraordinary.”

Is humility associated with success?  Well, turns out, it is a quality possessed by highly effective therapists.  Studies not only confirm “professional self-doubt” is a strong predictor of both alliance and outcome in psychotherapy but show it is actually a prerequisite for acquiring therapeutic expertise (1, 2).  To be clear, I’m not talking about debilitating diffidence or, as is popular in some therapeutic circles, knowingly adopting a “not-knowing” stance.  As researchers Hook, Watkins, Davis, and Owen describe, it’s about feedback — specifically, “valuing input from the other (or client) … and [a] willingness to engage in self-scrutiny.”

Low humility, research shows, is associated with compromised openness (3).  Sound familiar?  It is the most common reaction of owners featured on Kitchen Nightmares.  Season 5 contained two back-to-back episodes featuring Galleria 33, an Italian restaurant in Boston, Massachusetts.  As is typical, the show starts out with management expressing bewilderment about their failing business.  According to them, they’ve tried everything — redecorating, changing the menu, lowering prices.  Nothing has worked.  To the viewer, the problem is instantly obvious: they don’t take kindly to feedback.  When one customer complains their meal is “a little cold,” one of the owners becomes enraged.  She first argues with Ramsay, who agrees with the customer’s assessment, and then storms over to the table to confront the diner.  Under the guise of “just being curious and trying to understand,” she berates and humiliates them.  It’s positively cringeworthy.  After numerous similar complaints from other customers — and repeated, uncharacteristically calm, corrective feedback from Ramsay — the owner experiences a moment of uncertainty.  Looking directly into the camera she asks, “Am I in denial?”  The thought is quickly dismissed.  The real problem, she and the co-owner decide, is … (wait for it) …

Ramsay and their customers!  Is anyone surprised the restaurant didn’t survive?

Such dramatic examples aside, few therapists would dispute the importance of feedback in psychotherapy.  How do I know?  I’ve met thousands over the last two decades as I traveled the world teaching about feedback-informed treatment (FIT).  Research on implementation indicates a far bigger challenge is making sense of the feedback one receives (4, 5, 6).  Yes, we can (and should) speak with the client — research shows therapists do that about 60% of the time when they receive negative feedback.  However, like an unhappy diner in an episode of Kitchen Nightmares, they may not know exactly what to do to fix the problem.  That’s where outside support and consultation can be critical.  Distressingly, research shows, even when clients are deteriorating, therapists consult with others (e.g., supervisors, colleagues, expert coaches) only 7% of the time.

Since late summer, my colleagues and I at the International Center for Clinical Excellence have offered a series of intimate, virtual gatherings of mental health professionals.  Known as the FIT Cafe, the small group (10 max) gets together once a week to finesse their FIT-related skills and process client feedback.  It’s a combination of support, sharing, tips, strategizing, and individual consultation.  As frequent participant psychologist Claire Wilde observes, “it has provided critical support for using the ORS and SRS to improve my therapeutic effectiveness with tricky cases, while also learning ways to use collected data to target areas for professional growth.”

Information about the series can be found here.  Not ready for such an “up close and personal” experience?  Please join the ICCE online discussion forum.  It’s free.  You can connect with knowledgeable and considerate colleagues working to implement FIT and deliberate practice in their clinical practice in diverse settings around the world.

Filed Under: deliberate practice, excellence, Feedback, Feedback Informed Treatment - FIT, FIT, Therapeutic Relationship

Getting Beyond the “Good Idea” Phase in Evidence-based Practice

July 9, 2020 By scottdm Leave a Comment

The year is 1846.  Hungarian-born physician Ignaz Semmelweis is in his first month of employment at Vienna General Hospital when he notices a troublingly high death rate among women giving birth in the obstetrics ward.  Medical science at the time attributes the problem to “miasma,” an invisible, poisonous gas believed responsible for a variety of illnesses.

Semmelweis has a different idea.  Having noticed the midwives at the hospital have a death rate six times lower than that of the physicians, he concludes the prevailing theory cannot possibly be correct.  The final breakthrough comes when a male colleague dies after puncturing his finger while performing an autopsy.  Reasoning that contact with corpses is somehow implicated in the higher death rate among physicians, he orders all to wash their hands prior to interacting with patients.  The rest is, as they say, history.  In no time, the mortality rate on the maternity ward plummets, dropping to the same level as that of the midwives.

Nowadays, of course, handwashing is considered a “best practice.”  Decades of research show it to be the single most effective way to prevent the spread of infections.  And yet, more than 170 years after Semmelweis’s life-saving discovery, compliance with hand hygiene among healthcare professionals remains shockingly low, with figures consistently ranging between 40 and 60% (1, 2).  Bottom line: a vast gulf exists between sound scientific practices and their implementation in real-world settings.  Indeed, the evidence shows 70 to 95% of attempts to implement evidence-based strategies fail.

To the surprise of many, successful implementation depends less on disseminating “how to” information to practitioners than on establishing a culture supportive of new practices.  In one study of hand washing, for example, when Johns Hopkins Hospital administrators put policies and structures in place facilitating an open, collaborative, and transparent culture among healthcare staff (e.g., nurses, physicians, assistants), compliance rates soared and infections dropped to zero!

Feedback Informed Treatment (FIT) — soliciting and using formal client feedback to guide mental health service delivery — is another sound scientific practice.  Scores of randomized clinical trials and naturalistic studies show it improves outcomes while simultaneously reducing dropout and deterioration rates.  And while literally hundreds of thousands of practitioners and agencies have downloaded the Outcome and Session Rating Scales — my two brief feedback tools — since they were developed nearly 20 years ago, I know most will struggle to put them into practice in a consistent and effective way.

To be clear, the problem has nothing to do with motivation or training.  Most are enthusiastic to start.  Many invest significant time and money in training.  Rather, just as with hand washing, the real challenge is creating the open, collaborative, and transparent workplace culture necessary to sustain FIT in daily practice.  What exactly does such a culture look like and what actions can practitioners, supervisors, and managers take to facilitate its development?  That’s the subject of our latest “how to” video by ICCE Certified Trainer, Stacy Bancroft.  It’s packed with practical strategies tested in real world clinical settings.

By the way, want to interact with FIT Practitioners around the world?  Join the conversation here.

Filed Under: evidence-based practice, Feedback Informed Treatment - FIT, FIT, Implementation

Questions and Answers about Feedback Informed Treatment and Deliberate Practice: Another COVID-19 Resource

April 16, 2020 By scottdm 4 Comments

Since they were developed and tested back in the late ’90s, the Outcome and Session Rating Scales have been downloaded by practitioners more than 100,000 times!  Judging by the number of cases entered into the three authorized software applications, the tools have been used to inform service delivery for millions of clients seeking care for different problems in diverse treatment settings.  The number of books, manuals, and “how to” videos describing how to use the tools has continued to grow dramatically.

Here is one more option for support: a recording of a live webinar discussing FIT and deliberate practice with professionals from around the world.  I think you’ll be surprised by the depth and breadth of the information covered.  You can listen to the entire broadcast or use the guide below to jump directly to the questions that matter most to you. 

  1. How to get started with FIT? (2:23)
  2. How can I encourage my clients to provide open, honest feedback? (10:30; revisited 36:15)
  3. Should I start using the measures with established clients? (13:18, revisited 17:05)
  4. How do I know how effective I am? (14:45)
  5. How to interpret ORS and SRS feedback (18:10)
  6. How to use the scales online/on the phone? (22:00)
  7. How effective is supervision? (26:58)
  8. How to work with mandated clients? (31:30)
  9. Why do some clients not give feedback? (37:00)
  10. What is deliberate practice and how to apply it for improving therapist effectiveness? (46:00)

Filed Under: deliberate practice, Feedback Informed Treatment - FIT, FIT, FIT Software Tools, ICCE, Implementation

Far from Normal: More Resources for Feedback Informed Treatment in the Time of COVID-19

March 31, 2020 By scottdm 4 Comments

I hope this post finds you, your loved ones, and colleagues, safe and healthy.

What an amazing few weeks this has been.  Daily life, as most of us know it, has been turned upside down.  The clinicians I’ve spoken with are working frantically to adjust to the new reality, including staying abreast of rapidly evolving healthcare regulation and learning how to provide services online.

I cannot think of a time in recent memory when the need to adapt has been more pressing.  As everyone knows, feedback plays a crucial role in this process.

Last week, I reported a surge in downloads of the Outcome and Session Rating Scales (ORS & SRS), up 21% over the preceding three months.  Independent, randomized controlled trials document clients are two and a half times more likely to benefit from therapy when their feedback is solicited via the measures and used to inform care.  Good news, eh?  Practitioners are looking for methods to enhance their work in these new and challenging circumstances.  The only problem is the same research shows it takes time to learn to use the measures effectively — and that’s under the best or, at least, most normal of circumstances!

Given that we are far from normal, the team at the International Center for Clinical Excellence, in combination with longtime technology and continuing education partners, has been working to provide the resources necessary for practitioners to make the leap to online services.  In my prior post, a number of tips were shared, including empirically validated scripts for oral administration of the ORS and SRS as well as instructional videos for texting, email, and online use via the three authorized FIT software platforms.

We are not done.  Below, you will find two new instructional videos from ICCE Certified Trainers Stacy Bancroft and Brooke Mathewes.  They provide step-by-step instructions and examples of how to administer the measures orally — a useful skill if you are providing services online or via the telephone.


Filed Under: Feedback Informed Treatment - FIT, FIT

Please, don’t use my scales…

December 12, 2019 By scottdm 3 Comments

Or, at least that’s what I said in response to his question.  The look on his face made clear my words caused more confusion than clarity.

“But then, how will I find out which of the therapists at my agency are effective?” he asked.

“The purpose of FIT,” I replied, “is not to profile, but rather help clinicians respond more effectively to their clients.”

And I’ve found myself giving similar advice of late — in particular, actively counseling practitioners and clinic directors against using the ORS and SRS.

Here’s another:

“We need a way to meet the new Joint Commission/SAMHSA requirement to use a standardized outcome measure in all therapeutic work.”

My reply?

FIT is purposefully designed to help those in treatment achieve the best results possible — and a significant body of evidence indicates it does.  Thus, while integrating measures into care has, in some countries, become a standard of care, using them merely to meet regulatory requirements is de facto unethical.  Please don’t use my scales.

One more?

“I don’t (or won’t) use the scales with all my clients, just those I decide it will be clinically useful with.”

What do I think?

The evidence clearly shows clinicians often believe they are effective or aligned with clients when they are not.  The whole purpose of routinely using outcome and alliance measures is to fill in these gaps in clinical judgment.  Please don’t use my scales.

Last, as I recently blogged about, “The scales are really very simple and self-explanatory so I don’t think we really need much in the way of training or support materials.”

My response?

We have substantial evidence to the contrary.  In sharp contrast to the mere minutes involved in downloading and learning to administer the measures, actual implementation of FIT takes considerable time and support — more than most seem aware of or willing to invest.

PLEASE DON’T USE MY SCALES!

While I could cite many more examples of when not to use routine outcome measures (e.g., “we need a way to identify clients we aren’t helping so we can terminate services with them and free up scarce clinical resources” or “I want to have data to provide evidence of effectiveness to funding sources”) — I will refrain.

As one dedicated FIT practitioner recently wrote, “Using FIT is brutal. Without it, it’s the patients’ fault. With FIT, it’s mine. Grit your way through . . . because it’s good and right.”

I could not have said it any better.

Filed Under: Feedback, Feedback Informed Treatment - FIT, FIT

It’s Time to Abandon the “Mean” in Psychotherapy Practice and Research

April 8, 2019 By scottdm 6 Comments

Recognize this?  Yours will likely look a bit different.  If you drive an expensive car, it may be motorized, with buttons automatically set to your preferences.  All, however, serve the same purpose.

Got it?

It’s the lever for adjusting your car seat.

I’m betting you’re not impressed.   Believe it or not though, this little device was once considered an amazing innovation — a piece of equipment so disruptive manufacturers balked at producing it, citing “engineering challenges” and fear of cost overruns.

For decades, seats in cars came in a fixed position.  You could not move them forward or back.  For that matter, the same was the case with seats in the cockpits of airplanes.  The result?  Many dead drivers and pilots.

The military actually spent loads of time and money during the 1940s and ’50s looking for the source of the problem.  Why, they wondered, were so many planes crashing?  Investigators were baffled.

Every detail was checked and rechecked.  Electronic and mechanical systems tested out.  Pilot training was reviewed and deemed exceptional.  Systematic review of accidents ruled out human error.  Finally, the equipment was examined.  Nothing, it was determined, could have been more carefully designed — the size and shape of the seat, distance to the controls, even the shape of the helmet, were based on measurements of 140 dimensions of 4,000 pilots (e.g., thumb length, hand size, waist circumference, crotch height, distance from eye to ear, etc.).

It was not until a young lieutenant, Gilbert S. Daniels, intervened that the problem was solved.  Turns out, despite the careful measurements, no pilot fit the average of the various dimensions used to design the cockpit and flight equipment.  Indeed, his study found that even when “the average” was defined as the middle 30 percent of the range of values on any given dimension, no actual pilot fell within the range on all of them!

The conclusion was as obvious as it was radical.  Instead of fitting pilots into planes, planes needed to be designed to fit pilots.  Voila!  The adjustable seat was born.

Now, before you scoff — wisecracking, perhaps, about “military intelligence” being the worst kind of oxymoron — beware.  The very same “averagarianism” that gripped leaders and engineers in the armed services is still in full swing today in the field of mental health.

Perhaps the best example is the randomized controlled trial (RCT) — deemed the “gold standard” for identifying “best practices” by professional organizations, research scientists, and governmental regulators.

However sophisticated the statistical procedures may appear to the non-mathematically inclined, they are nothing more than mean comparisons.

Briefly, participants are recruited and then randomly assigned to one of two groups (e.g., Treatment A or a Control group; Treatment A or Treatment as Usual; and more rarely, Treatment A versus Treatment B).  A measure of some kind is administered to everyone in both groups at the beginning and the end of the study.   Should the mean response of one group prove statistically greater than the other, that particular treatment is deemed “empirically supported” and recommended for all.

The flaw in this logic is hopefully obvious: no individual fits the average.  More, as any researcher will tell you, the variability between individuals within each group is most often greater than the variability between the groups being compared.
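
To see what that means in numbers, here is a minimal simulation sketch in Python.  The effect size and sample size are purely illustrative values chosen for the example, not figures from any particular study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative numbers only (an assumption, not data from any trial):
# a standardized between-group effect of 0.4 with 200 people per arm.
treatment = rng.normal(loc=0.4, scale=1.0, size=200)
control = rng.normal(loc=0.0, scale=1.0, size=200)

between = treatment.mean() - control.mean()  # difference between the two group means
within = np.concatenate(
    [treatment - treatment.mean(), control - control.mean()]
).std()  # spread of individuals around their own group's mean

print(f"Between-group difference: {between:.2f}")
print(f"Within-group spread (SD): {within:.2f}")
print(f"Control participants scoring above the treatment mean: "
      f"{(control > treatment.mean()).mean():.0%}")
```

Knowing which group a person was assigned to tells you surprisingly little about how that particular person fared, which is exactly the problem with prescribing “for all” on the basis of a mean difference.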

Bottom line: instead of fitting people into treatments, mental health care should be made to fit the person.  Doing so is referred to, in the psychotherapy outcome literature, as responsiveness — that is, “doing the right thing at the right time with the right person.”  And while the subject receives far less attention in professional discourse and practice than diagnostic-specific treatment packages, evidence indicates it accounts for why “certain therapists are more effective than others…” (p. 71, Stiles & Horvath, 2017).

I’m guessing you’ll agree it’s time for the field to make an “adjustment lever” a core standard of therapeutic practice — I’ll bet it’s what you try to do with the people you care for anyway.

Turns out, a method exists that can aid in our efforts to adjust services to the individual client.  It involves routinely and formally soliciting feedback from the people we treat.  That said, not all feedback is created equal.  With a few notable exceptions, all routine outcome monitoring (ROM) systems in use today suffer from the same problem that dogs the rest of the field.  In particular, all generate feedback by comparing the individual client to an index of change based on the average of a large sample (e.g., reliable change index, median response of an entire sample).
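
For readers curious how such sample-based yardsticks are built, below is a minimal sketch of the classic Jacobson-Truax reliable change calculation.  The norm values in the sketch are placeholders, not the published figures for the ORS or any other instrument.

```python
import math

def reliable_change_index(pre: float, post: float,
                          sd_norm: float, reliability: float) -> float:
    """Jacobson-Truax reliable change index: how many standard errors of
    the difference score separate a client's post score from their pre score."""
    se_measurement = sd_norm * math.sqrt(1 - reliability)  # standard error of measurement
    sd_diff = math.sqrt(2) * se_measurement                # SE of a pre-post difference
    return (post - pre) / sd_diff

# Placeholder norms for illustration only -- NOT the published ORS values.
SD_NORM, RELIABILITY = 7.0, 0.85

rci = reliable_change_index(pre=18, post=25, sd_norm=SD_NORM, reliability=RELIABILITY)
label = "reliable change" if abs(rci) > 1.96 else "within measurement error"
print(f"RCI = {rci:.2f} -> {label}")
```

Whatever the numbers, the logic is the same: each client’s change is judged against a threshold derived from group statistics rather than from that client’s own trajectory.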

By contrast, three computerized outcome monitoring systems use cutting-edge technology to provide feedback about progress and the quality of the therapeutic alliance unique to the individual client.  Together, they represent a small step in providing an evidence-based alternative to the “mean” approaches traditionally used in psychotherapy practice and research.

Filed Under: Behavioral Health, evidence-based practice, Feedback, Feedback Informed Treatment - FIT, FIT, FIT Software Tools

Routine Outcome Monitoring and Deliberate Practice: Fad or Phenomenon?

March 26, 2019 By scottdm 1 Comment

Would you believe me if I told you there was a way you could more than double the chances of helping your clients?  Probably not, eh?  As I’ve documented previously, claims abound regarding new methods for improving the outcome of psychotherapy.  It’s easy to grow cynical.

And yet, findings from a recent study document that when clinicians add this particular practice to their clinical work, clients are actually 2.5 times more likely to improve.  The impact is so significant, a review of research emerging from a task force of the American Psychological Association concluded, “it is among the most effective ways available to services to improve outcomes.”

That said, there’s a catch.

The simple nature of this “highly rated,” transtheoretical method belies a steep learning curve.  In truth, experience shows you can learn  to do it — the mechanics — in a few minutes.

But therein lies the problem.  The empirical evidence makes clear successful implementation often takes several years.  This latter fact explains, in part, why surveys of American, Canadian, and Australian practitioners reveal that, while being aware of the method, they rarely integrate it into their work.

What exactly is the “it” being referred to?

Known by the acronym FIT, feedback-informed treatment involves using standardized measures to formally and routinely solicit feedback from clients regarding progress and the quality of the therapeutic relationship, and then using the resulting information to inform and improve care.

The ORS and SRS are examples of two simple feedback scales used in more than a dozen randomized controlled trials as well as vetted and deemed “evidence-based” by the Substance Abuse and Mental Health Services Administration.  Together, the forms take less than 3 minutes to administer, score and interpret (less if one of the web-based scoring systems is used).
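
As a rough illustration of why scoring takes so little time, here is a sketch of ORS-style paper-and-pencil scoring.  It assumes the commonly described format of four 10 cm visual analogue lines, each mark measured in centimeters and then summed; it is not official scoring code, and interpretation (cutoffs, reliable change) should follow the published scoring instructions.

```python
from typing import Dict

def score_ors(marks_cm: Dict[str, float]) -> float:
    """Sum four 0-10 cm visual-analogue marks into a 0-40 total."""
    for domain, cm in marks_cm.items():
        if not 0 <= cm <= 10:
            raise ValueError(f"{domain}: mark must fall on the 10 cm line")
    return sum(marks_cm.values())

# Hypothetical first-session marks, measured in centimeters from the left end.
session_one = {"Individual": 4.3, "Interpersonal": 5.1, "Social": 6.0, "Overall": 4.8}
print(f"ORS total: {score_ors(session_one):.1f} / 40")
```

The SRS follows the same four-line format, with the marks summed to a 0-40 alliance total.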

So why, you might wonder, would it take so long to put such tools into practice?

As paradoxical as it may sound, because FIT is really not about using measures — any more, say, than making a home is about erecting four walls and a roof.  While the structure is the most visible aspect — a symbol or representation — we all know it’s what’s inside that counts; namely, the people and their relationships.

On this score, it should come as no surprise that a newly released study has found a significant portion of the impact of FIT is brought about by the alliance or relationship between client and therapist.   It’s the first study in history to look at how the process actually works and I’m proud to have been involved.

Of course, all practitioners know relationship skills are not only central to effective psychotherapy, but require lifelong learning.  With time, and the right kind of support, using measurement tools facilitates both responsiveness to individual clients and continuous professional development.

Here’s the rub.  Whenever I respond to inquiries about the tools — in particular, suggesting it takes time for the effects to manifest, and that the biggest benefit lies beyond the measurement of alliance and outcome — interest in FIT almost always disappears.  “We already know how to do therapy,” a manager  replied just over a week ago, “We only want the measures, and we like yours because they are the simplest and fastest to administer.”

Every so often, however, the reply is different.  “What do we have to do to make this work to improve the effectiveness of our clinical work and clinicians?” asked Thomas Haastrup, the Coordinator of Family Services for Odense Municipality in Denmark.  When I advised planning and patience, with an emphasis on helping individual practitioners learn to use feedback to foster professional development versus simply measuring their results, he followed through.  “We adopted the long view,” Thomas recounts, “and it’s paid off.”  Now in their 5th year, outcomes are improving at both the program and provider level across services aimed at helping adults, children, and families.

Filed Under: evidence-based practice, excellence, Feedback, Feedback Informed Treatment - FIT, FIT

Better Results through Deliberate Practice

January 16, 2018 By scottdm Leave a Comment


The legendary cellist Pablo Casals was once interviewed by comedian George Carlin.  When asked why, at age 93, he continued to practice three hours a day, Casals replied, “I’m beginning to show some improvement!”

Hard not to feel inspired and humbled by such dedication, eh?  And while humorous, Casals was not joking.  Across a wide variety of domains (e.g., sports, computer programming, teaching), deliberate practice leads to better results.  Indeed, our recent study of mental health practitioners documented a growth in effectiveness consistent with performance improvements obtained by elite athletes.

The January 2018 issue of the APA Monitor includes a detailed article on the subject.  Staff writer Tori DeAngelis lays out the process of applying deliberate practice strategies to clinical work in clear, step-by-step terms.  Best of all, it’s free — even continuing education credits are available if you need them.

Filed Under: Behavioral Health, deliberate practice, excellence, Feedback, Feedback Informed Treatment - FIT, FIT, Top Performance

Something BIG is Happening: The Demand for Routine Outcome Measurement from Funders

October 16, 2017 By scottdm 2 Comments

Something is happening.  Something big.

Downloads of the Outcome and Session Rating Scales have skyrocketed.

The number of emails I receive has been steadily increasing.

The subject?  Routine outcome measurement.  The questions:

  • Where can I get copies of your measures?

Paper and pencil versions are available on my website.

  • What is the cost?

Individual practitioners can access the tools for free.  Group licenses are available for agencies and healthcare systems.

  • Can we incorporate the tools into our electronic healthcare record (E.H.R.)?

Three companies are licensed and authorized to provide an “Application Programming Interface” (API) for integrating the ORS, SRS, data aggregation formulas, and feedback signals directly into your E.H.R.  Detailed information and contact forms are available on a special page on my website.

  • What evidence is available for the validity, reliability, and effectiveness of the measures?

Always a good question!  Since the tools were published seventeen years ago, studies have multiplied.  Keeping up with the data can be challenging as the tools are being used in different settings and with diverse clinical populations around the world.

Each year, my colleague, New Zealand psychologist Eeuwe Schuckard, and I add the latest research to a comprehensive document available for free online, titled “Measures and Feedback.”

Additionally, the tools have been vetted by an independent group of research scientists and are listed on the Substance Abuse and Mental Health Services Administration’s National Registry of Evidence-based Programs and Practices.

  • How can I (or my agency) get started?

Although it may sound simple and straightforward, this is the hardest question to answer.  There is often a tone of urgency in the emails I receive, “We need to measure outcomes now,” they say.

I nearly always respond with the same advice: the fastest way to succeed is to go slow.

We’ve learned a great deal about implementation over the last 10 years.  Getting practitioners to administer outcome measures is easy.  I can teach them how in less than three minutes.  Making the process more than just another, dreary “administrative task” takes time, patience, and persistence.

I caution against purchasing licenses, software, or onsite training.  Instead, I recommend taking time to explore.  It’s why the reviewers at SAMHSA gave our application for evidence-based status the highest ratings on “implementation support.”

To succeed, take these three steps:

  1. Access a copy of the ICCE Feedback Informed Treatment Manual–the single most comprehensive resource available on using the ORS and SRS.  Read and discuss it together with colleagues.
  2. Connect with practitioners and agencies around the world who have already implemented.  It’s easy.  Join the International Center for Clinical Excellence–the world’s largest online community dedicated to routine outcome measurement.
  3. Send a few key staff–managers, supervisors, implementation team leaders–to the Feedback-Informed Treatment Intensives.  The Advanced and Supervision workshops are held back-to-back each March in Chicago.  Participants not only leave with a thorough understanding of the ORS and SRS, but also ready to kick off a successful implementation at home.  I tell people to sign up early as the courses are limited to 35 participants and always sell out a couple of months in advance.

Filed Under: Behavioral Health, evidence-based practice, Feedback Informed Treatment - FIT, FIT, FIT Software Tools, Implementation, PCOMS

The Illness and the Cure: Two Free, Evidence-based Resources for What Ails and Can Heal Serious Psychological Distress

April 18, 2017 By scottdm 14 Comments

Findings from several recent studies are sobering. Depression is now the leading cause of ill-health and disability worldwide–more than cancer, heart disease, respiratory problems, and accidents.  Yesterday, researchers reported that serious psychological distress is at an all-time high, significantly affecting not only quality of life but actual life expectancy.  And who has not heard about the opioid crisis?

The research is clear:  psychotherapy helps.  Indeed, its effectiveness is on par with coronary artery bypass surgery.  Despite such results, availability of mental health services in the U.S. and other Westernized nations has seriously eroded over the last decade.   Additionally, modern clinical practice is beset by regulation and paperwork, much of which gets in the way of treatment’s most important healing ingredient: the relationship.

What can practitioners do?

Completing paperwork together with clients during the visit–a process termed “collaborative (or concurrent) documentation”–has been shown to save full-time practitioners between 6 and 8 hours per week, thereby improving capacity by up to 20%.
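
Where does the 20% figure come from?  A quick back-of-the-envelope check, assuming a 40-hour clinical week (my assumption, not a number from the post):

```python
# Back-of-the-envelope check (assumes a 40-hour clinical week -- an
# illustrative assumption; the 6-8 hours saved comes from the figures above).
weekly_hours = 40
for saved in (6, 8):
    print(f"{saved} hours saved -> {saved / weekly_hours:.0%} of the week freed up")
# Prints 15% and 20%, which is where the "up to 20%" capacity figure comes from.
```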

It’s a great idea: completing assessments, treatment plans, and progress notes together with clients during rather than after the session. Unfortunately, its chief selling point to date seems to be that it saves time on documentation–as though filling out paperwork were an end in and of itself!  Clearly, the real challenges facing mental health services are getting people into and keeping them in care.  Here, the research literature is clear: people are more likely to stay engaged in care that (1) is organized around their goals and (2) works.  Collaborating on and coming to a consensus regarding the goals for treatment, for example, has the largest impact on outcome among all of the relationship factors in psychotherapy, including empathy!  Additionally, when documentation FITs the clients’ view of the process and is deemed transparent and respectful, trust–another essential ingredient of the therapeutic relationship–improves.

For the last several years, practitioners and agencies around the world have been using the ICCE “Service Delivery Agreement” and “Progress Note” as part of their documentation of clinical services.  Both were specifically designed to be completed collaboratively with clients at the time the service is provided and both are focused on documenting what matters to people in treatment.  Most important of all, however, both are part of an evidence-based process documented to improve engagement and effectiveness listed on SAMHSA’s National Registry of Evidence-based Programs and Practices.

If you’d like copies for yourself, just email me at scottdmiller@talkingcure.com.

Filed Under: Behavioral Health, CDOI, Conferences and Training, excellence, Feedback Informed Treatment - FIT, FIT, Implementation

Does practice make perfect?

August 30, 2016 By scottdm 1 Comment

“Practice does not make perfect,” my friend, and award-winning magician, Michael Ammar, is fond of saying.  “Rather,” he observes, “practice makes permanent.”

Thus, if we are not getting better as we work, our work will simply ensure our current performance stays the same.

Now, before reading any further, watch a bit of the video below.  It features Diana Damrau singing one of the most recognizable arias from Mozart’s “The Magic Flute.”  Trust me, even if you don’t like opera, this performance will make the hair on your neck stand on end.

All right, now click on the video below (and listen for as long as you can stand it).

No, the latter recording is not a joke.  Neither is it a reject from one of the “GOT TALENT” shows so popular on TV at present.  It’s none other than Florence Foster Jenkins—an American socialite and heiress who was, according to Wikipedia, “a prominent musical cult figure…during the 1920s, ’30s, and ’40s.”

Florence Jenkins

How could that be, you may well wonder?  Her pitch is off, and there are so many mistakes in terms of rhythm, tempo, and phrasing in the first 30 seconds, one quickly loses count.

The problem?  In a word, feedback—more specifically, the lack of critical feedback extending over many years.

For most of her career, Lady Florence, as she liked to be called, performed to “select audiences” in her home or small clubs. Attendance was invitation-only–and Jenkins controlled the list.  Her guests did their best not to let on what they thought of her abilities.  Instead, they smiled approvingly and applauded–loudly, as it turns out, in an attempt to cover the laughter that invariably accompanied her singing!

Everything changed in 1944 when Jenkins booked Carnegie Hall for a public performance. This time, the applause was not sufficient to cover the laughter.  If anything, the laughter grew, the audience treating the performance as a comedy act and encouraging the singer to continue the frivolity.

The reviews were scathing.  The next morning, the critic for the New York Sun wrote that Lady Florence “…can sing everything…except notes…”

The moral of the story?  Practice is not enough.  To improve, feedback is required.  Honest feedback–and the earlier in the process, the better. Research indicates the keys to success are: (1) identifying performance objectives that lie just beyond an individual’s current level of reliable achievement; (2) immediate feedback; and (3) continuous effort aimed at gradually refining and improving one’s performance.

Here’s the parallel with psychotherapy: the evidence shows therapist self-appraisal is not a reliable measure of either the quality or effectiveness of their work.  Indeed, a number of studies have found that, when asked, the least effective clinicians rate themselves on par with the most effective–a finding that could well be labeled the “Jenkins Paradox.”

Evidence-based measures exist that can help therapists avoid the bias inherent in self-assessment as well as aid in the identification of small, achievable performance improvement objectives.  Studies document, for example, how therapists can use such tools, in combination with immediate feedback and practice, to gradually yet significantly improve the quality and effectiveness of their therapeutic relationships–arguably, the most important contributor to treatment outcome.

Let me leave you with one last video.  It’s an interview I did with Danish psychologist Susanne Bargmann. Over the last 5 years, she’s applied the principles described here in an attempt to improve not only her effectiveness as a clinician, but also her musicianship.  Recently, her efforts came to the attention of the folks at Freakonomics Radio.  As was the case when you listened to Diana Damrau, you’ll come away inspired!

Filed Under: CDOI, evidence-based practice, Feedback Informed Treatment - FIT, FIT, Top Performance

Improving the Odds: Implementing FIT in Care for Problem Gamblers and their Families

April 17, 2016 By scottdm 1 Comment

Quick Healthcare Quiz

What problem in the U.S. costs the government approximately $274 per adult annually?

If you guessed gambling, give yourself one point.  According to the latest research, nearly 6 million Americans have a serious gambling problem—a number that is on the rise.  One-third of the nation’s adults visit a casino every year, losing, according to the latest figures, an estimated 100 billion dollars.

Which problem is more common?  Substance abuse or problem gambling?

If you guessed the former, give yourself another point.  Problems related to alcohol and drug use are about 3.5 times more common than problem gambling.  At the same time, 281 times more funding is devoted to treating drug and alcohol problems.  In March 2014, the National Council on Problem Gambling reported that government-funded treatment was provided to less than one quarter of one percent of those in need.

Does psychotherapy work for problem gambling?

If you answered “yes,” add one to your score.  Research not only indicates that psychological treatment approaches are effective, but that changes are maintained at follow up.  As with other presenting problems (e.g., anxiety and depression), more therapy is associated with better outcomes than less.

What is the key to successful treatment of problem gambling?

If you answered, “funding and getting people into treatment,” or some variation thereof, take away three points!

So, how many points do you have left?  If you are at or near zero, join the club.

Healthcare is obsessed with treatment.  A staggering 99% of resources are invested in interventions.  Said another way, practitioners and healthcare systems love solutions.  The problem is that research shows this investment “does not result in positive implementation outcomes (changes in practitioner behavior) or intervention outcomes (benefits to consumers).”  Simply put, it’s not enough to know “what works.”  You have to be able to put “what works” to work.

BCRPGP

Enter the BC Responsible and Problem Gambling Program—an agency that provides free support and treatment services aimed at reducing and preventing the harmful impacts of excessive or uncontrolled gaming.  Clinicians working for the program not only sought to provide cutting-edge services, they wanted to know if they were effective and what they could do to continuously improve.

Five years ago, the organization adopted feedback-informed treatment (FIT)—routinely and formally seeking feedback from clients regarding the quality and outcome of services offered.    A host of studies documents that FIT improves retention in and outcome of psychotherapy.  Like all good ideas, however, the challenge of FIT is implementation.

Last week, I interviewed Michael Koo, the clinical coordinator of the BCRPGP.  Listen in as he discusses the principles and challenges of their successful implementation.  Learn also how the talented and devoted crew achieve outcomes on par with randomized controlled trials in an average of 7 visits while working with a culturally and clinically diverse clientele.

As you’ll hear, implementation is difficult, but doable.  More, you don’t have to reinvent the wheel or do it alone.  When FIT was reviewed and deemed “evidence-based” by the Substance Abuse and Mental Health Services Administration (SAMHSA) in 2013, it received perfect scores for “implementation, training, support, and quality assurance” resources.

Filed Under: Behavioral Health, Conferences and Training, evidence-based practice, Feedback Informed Treatment - FIT, FIT, ICCE

What is the essential quality of effective Feedback? New research points the way

February 8, 2016 By scottdm 1 Comment

“We should not try to design a better world,” says Owen Barder, senior fellow at the Center for Global Development, “We should make better feedback loops.”

Feedback has become a bit of a buzzword in mental health.  Therapists are being asked to administer formal measures of progress and of the quality of the therapeutic relationship, and to use the resulting information to improve their effectiveness.

As it turns out, not all feedback is created equal.  The key to success is obtaining information that gives rise to increased consciousness—the type that causes one to pause, reflect, rethink.  In a word, negative feedback.

Nearly a decade ago, we noticed a curious relationship between effectiveness and the therapeutic alliance.  Relationships that started off poorly but improved were nearly 50% more effective than those rated good throughout.

Additional evidence comes from a real-world study of therapy with adolescents (Owen, Miller, Seidel, & Chow, 2016).  Therapists asked for and received feedback via the Outcome and Session Rating scales at each and every visit.  Once again, relationships that improved over the course of treatment were significantly more effective.

Importantly, obtaining lower scores at the outset of therapy provides clinicians with an opportunity to discuss and address problems early in the working relationship.  But, how best to solicit such information?

The evidence documents that using a formal measure is essential, but not enough.  The most effective clinicians work hard at creating an environment that not only invites, but actively utilizes, feedback.  Additionally, they are particularly skilled at asking questions that go beyond platitudes and generalities, in the process transforming client experience into specific steps for improving treatment.

As statistician and engineer W. Edwards Deming once observed, “If you do not know how to ask the right question, you discover nothing.”

Little useful information is generated when clients are asked, “How did you feel about the session today?” “Did you feel like I (listened to/understood) you?” or “What can I do better?”

The best questions are:

  • Specific rather than general;
  • Descriptive rather than evaluative;
  • Concerned with quantities rather than qualities; and are
  • Task rather than person-oriented.

Over the years, we’ve come to understand that learning to ask the “right” question takes both time and practice.  It’s not part of most training programs, and it comes naturally to only a few.  As a result, many therapists who start using formal measures to solicit feedback about progress and the therapeutic relationship give up, frustrated in their efforts to obtain helpful feedback.

Learn more from these free articles.

Filed Under: Feedback, Feedback Informed Treatment - FIT, FIT, Top Performance

Are you Better? Improving Effectiveness One Therapist at a Time

January 24, 2016 By scottdm 3 Comments

Greetings from snowy Sweden.  I’m in the beautiful city of Gothenburg this week, working with therapists and administrators on implementing Feedback-Informed Treatment (FIT).

I’m always impressed by the dedication of those who attend the intensive workshops.  More, I feel responsible for providing a training that not only results in mastery of the material, but also leads to better outcomes.

As commonsensical as it may seem to expect that training should foster better results, the evidence says otherwise.  Consider a recent study out of the United Kingdom.  There, massive amounts of money have been spent over the last five years training clinicians to use cognitive behavioral therapy (CBT).  The expenditure is part of a well-intentioned government program aimed at improving access to effective mental health services.

Anyway, in the study, clinicians participated in a year-long “high-intensity” course that included more than 300 hours of training, supervision, and practice—a tremendous investment of time, money, and resources.  Competency in delivering CBT was assessed at regular intervals and shown to improve significantly throughout the training.

The only problem?  Training therapists in CBT did not result in better outcomes.

While one might hope such findings would cause the researchers to rethink the training program, they chose instead to question whether “patient outcome should … be used as a metric of competence…” (p. 27).  Said another way, doing treatment the right way was more important than whether it actually worked!  One is left to wonder whether the researchers would have reached a similar conclusion had the study gone the other way.  Most certainly, the headline would then have been, “Empirical Research Establishes Connection between Competence in CBT and Treatment Outcome!”

Attempts to improve the effectiveness of treatment via the creation of a psychological formulary—official lists of specific treatments for specific disorders—have persisted, and even intensified, despite consistent evidence that the methods clinicians use contribute little to outcome.  Indeed, neither clinicians’ competence in conducting specific types of therapy nor adherence to evidence-based protocols have been “found to be related to patient outcome and indeed . . . estimates of their effects [are] very close to zero” (p. 207, Webb, DeRubeis, & Barber, 2010).

So, what gives?

There are two reasons why such efforts have failed:

  • First, they do not focus on helping therapists develop the skills that account for the lion’s share of variability in treatment outcome.

Empathy, for example, has a greater impact than the combined effect sizes of therapist competence, adherence to protocol, specific ingredients within, and differences between, various treatment approaches.  Still, most training efforts, like the one in the present study, continue to focus on method.

  • Second, they ignore the extensive scientific literature on expertise and expert performance.

Here, research has identified a universal set of processes, and step-by-step directions, anyone can follow to improve performance within a particular discipline.  To improve, training must be highly individualized, focused on helping performers reach for objectives just beyond their current ability.

“Deliberate Practice,” as it has been termed, requires grit and determination.  “Nobody is allowed to stagnate,” said one clinician when asked to describe what it was like to work at a clinic that had implemented the steps, adding, “Nobody is allowed to stay put in their comfort zone.”  The therapist works at Stangehjelpa, a community mental health service located an hour north of Oslo, Norway.

The director of the agency is psychologist, Birgit Valla (left), author of the visionary book, Further: How Mental Services Can Be Better.  Birgit is on a mission to improve outcomes—not by dictating the methods staff are allowed to use but by focusing on their individual development.

It starts with measuring outcomes.  All therapists at Stangehjelpa know exactly how effective they are and, more importantly, when they are not helpful.  “It’s not about the measures,” Birgit is quick to point out, “It’s about the therapist, and how the service can support that therapist getting better.”  She continues, “It’s like if you want to improve your time in the 100 meter race, you need a stopwatch.  It would be absurd to think, however, that the stopwatch is responsible for running faster.  Rather, it’s how one chooses to practice in relation to the results.”

Recently, researcher Siri Vikrem Austdal interviewed staff members at the clinic about their experience applying deliberate practice in their work.  Says one, “It is strenuous. You are expected to deliver all the time. But being part of a team that dares to have new thoughts, and that wants something, is really exciting. I need it, or I would grow tired. It is demanding, but then there is that feeling we experience when we have climbed a mountain top. Then it is all worthwhile. It is incredibly fun to make new discoveries and experience mastery.”

So, what exactly are they doing at Stangehjelpa?

You can read the entire report here (Norwegian), or the abbreviated version here (English).

Filed Under: Behavioral Health, evidence-based practice, Feedback Informed Treatment - FIT, FIT, ICCE, Top Performance

Swedish National Audit Office concludes: When all you have is CBT, mental health suffers

November 10, 2015 By scottdm 15 Comments

“The One-Sided Focus on CBT is Damaging Swedish Mental Health”

That’s the headline from one of Sweden’s largest daily newspapers for Monday, November 9th.  Professor Gunnar Bohman, together with colleagues and psychotherapists, Eva Mari Eneroth Säll and Marie-Louise Ögren, were responding to a report released last week by the Swedish National Audit Office (NAO).

In a prior post, I wrote about Sweden’s massive investment in cognitive behavioral therapy (CBT).  The idea was simple: address rising rates of disability due to mental illness by training clinicians in CBT.  At the time, a mere two billion Swedish crowns had been spent.

Now, several years and nearly 7 billion crowns later, the NAO audited the program.  Briefly, it found:

  • The widespread adoption of the method had no effect whatsoever on the outcomes of people disabled by depression and anxiety;
  • A significant number of people who were not disabled at the time they were treated with CBT became disabled, thereby increasing the amount of time they spent on disability; and
  • Nearly a quarter of people treated with CBT dropped out.

The Swedish NAO concludes, “Steering towards specific treatment methods has been ineffective in achieving the objective.”

How, you might reasonably ask, could anyone think that restricting choice would improve outcomes?  It was 1966 when psychologist Abraham Maslow famously observed, “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail” (p. 15, The Psychology of Science).  Still, many countries and professional organizations are charting a similar path today.

The choice is baffling, given the lack of evidence for differential efficacy among psychotherapeutic approaches. Consider a study I blogged about in April 2013.  It was conducted in Sweden at 13 different public health outpatient clinics over a three year period.  Consistent with 40 years of evidence, the researchers found that psychotherapy was remarkably effective regardless of the type of treatment offered!

So, what is the key to improving outcome?

As Bohman, Säll and Ögren point out in their article in Svenska Dagbladet, “offering choice…on the basis of patients’ problems, preferences and needs.”

The NAO report makes one additional recommendation: systematic measurement and follow-up.

As readers of this blog know, ensuring that services both fit the consumer and are effective is what Feedback-Informed Treatment (FIT) is all about.  More than 20 randomized clinical trials show that this transtheoretical process improves retention and outcome.  More, in 2013, FIT was deemed evidence-based by the Substance Abuse and Mental Health Services Administration.

Filed Under: Behavioral Health, evidence-based practice, Feedback, Feedback Informed Treatment - FIT, FIT

Do Psychotherapists Improve with Time and Experience?

October 27, 2015 By scottdm 14 Comments

The practice known as “routine outcome measurement,” or ROM, is resulting in the publication of some of the biggest and most clinically relevant psychotherapy studies in history.  Freed from the limits of the randomized clinical trial, and accompanying obsession with manuals and methods, researchers are finally able to examine what happens in real world clinical practice.

I’ve previously blogged about one of the largest studies of psychotherapy ever published.  More than 1,400 therapists participated.  The progress of over 26,000 people (aged 16-95) treated over a 12-year period in primary care settings in the UK was tracked on an ongoing basis via ROM.  The results?  In an average of 8 visits, 60% of those treated by this diverse group of practitioners achieved both reliable and clinically significant change—results on par with tightly controlled RCTs.  The study is a stunning confirmation of the effectiveness of psychotherapy.
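
For readers unfamiliar with how “reliable and clinically significant change” is typically operationalized, here is a minimal sketch, in Python, of the Jacobson-Truax logic.  The numbers (pre/post scores, standard deviation, reliability, and the clinical cutoff) are illustrative assumptions, not values taken from the UK study described above.

```python
import math

def reliable_change_index(pre: float, post: float, sd_pre: float, reliability: float) -> float:
    """Jacobson-Truax reliable change index: observed change divided by the
    standard error of the difference between two administrations."""
    se_measurement = sd_pre * math.sqrt(1 - reliability)
    se_difference = math.sqrt(2) * se_measurement
    return (post - pre) / se_difference

# Illustrative (hypothetical) values -- not drawn from the study above.
pre_score, post_score = 15.0, 27.0           # e.g., an ORS-style 0-40 scale
sd_pre, reliability = 7.0, 0.85

rci = reliable_change_index(pre_score, post_score, sd_pre, reliability)
reliable = abs(rci) > 1.96                   # change beyond measurement error
clinically_significant = post_score >= 25.0  # assumed clinical cutoff, for illustration

print(f"RCI = {rci:.2f}; reliable: {reliable}; "
      f"reliable AND clinically significant: {reliable and clinically_significant}")
```

In this framework, a client counts as both reliably and clinically improved only when the two conditions hold: the change exceeds what measurement error alone could plausibly produce, and the final score crosses from the clinical into the non-clinical range.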

This week, another mega-study was accepted for publication in the Journal of Counseling Psychology.  Once more, ROM was involved.  In this one, researchers Goldberg, Rousmaniere, Miller, Whipple, Nielsen, Hoyt, and Wampold examined a large, naturalistic data set that included outcomes of 6,500 clients treated by 170 practitioners whose results had been tracked for an average of 5 years.

Their question?

Do therapists become more effective with time and experience?

Their answer?  No.

For readers of this blog, such findings will not be particularly newsworthy.  As I’ve frequently pointed out, experience has never proven to be a significant predictor of effectiveness.

What might be a bit surprising is that the study found clinicians’ outcomes actually worsened with time and experience.  That’s right.  On average, the longer a therapist practiced, the less effective they became!  Importantly, this finding remained even when controlling for several patient-level, caseload-level, and therapist-level characteristics, as well as when excluding several types of outliers.

Such findings are noteworthy for a number of reasons but chiefly because they contrast sharply with results from other, equally-large studies documenting that therapists see themselves as continuously developing in both knowledge and ability over the course of their careers.   To be sure, the drop in performance reported by Goldberg and colleagues wasn’t steep.  Rather, the pattern was a slow, inexorable decline from year to year.

Where, one can wonder, does the disconnect come from?  How can therapists’ assessments of themselves and their work be so at odds with the facts?  Especially considering that, in the study by Goldberg and colleagues, participating clinicians had ongoing access to data regarding their effectiveness (or lack thereof) on a real-time basis!  Even in the study I blogged about—the largest in history, in which the outcomes of psychotherapy were shown to be quite positive—a staggering 40% of people treated experienced little or no change whatsoever.  How can such findings be reconciled with others indicating that clinicians routinely overestimate their effectiveness by 65%?

Turns out, the boundary between “belief in the process” and “denial of reality” is remarkably fuzzy.  Hope is a significant contributor to outcome—accounting for as much as 30% of the variance in results.  At the same time, it becomes toxic when actual outcomes are distorted in a manner that causes practitioners to miss important opportunities to grow and develop—not to mention help more clients.  Recall studies documenting that top-performing therapists evince more of what researchers term, “professional self-doubt.”  Said another way, they are less likely to see progress where none exists and more likely to value outcomes over therapeutic process.

What’s more, unlike their more average counterparts, highly effective practitioners actually become more effective with time and experience.  In the article below, my colleagues and I at the International Center for Clinical Excellence identify several evidence-based steps any practitioner can follow to achieve similar results.

Do therapists improve (preprint)
The outcome of psychotherapy yesterday, today, and tomorrow (psychotherapy miller, hubble, chow, seidal, 2013)

Filed Under: Behavioral Health, evidence-based practice, Feedback, Feedback Informed Treatment - FIT, FIT, Top Performance Tagged With: excellence, outcome rating scale, psychotherapy

The Verdict is “In”: Feedback is NOT enough to Improve Outcome

September 21, 2015 By scottdm 17 Comments

Years have passed since I blogged about claims being made about the impact of routine outcome monitoring (ROM) on the quality and outcome of mental health services.  While a small number of studies showed promise, other results indicated that therapists neither learned from nor became more effective over time as a result of being exposed to ongoing feedback.  Such findings suggested that the focus on measures and monitoring might be misguided–or at least a “dead end.”

Well, the verdict is in: feedback is not enough to improve outcomes.  Indeed, researchers are finding it hard to replicate the medium to large effect sizes enthusiastically reported in early studies, a well-known phenomenon called the “decline effect,” observed across a wide range of scientific disciplines.


In a naturalistic multisite randomized clinical trial (RCT) in Norway, for example, Amble, Gude, Stubdal, Andersen, and Wampold (2014) found the main effect of feedback to be much smaller (d = 0.32) than the meta-analytic estimate reported by Lambert and Shimokawa (2011 [d = 0.69]).  A more recent study (Rise, Eriksen, Grimstad, and Steinsbekk, 2015) found that routine use of the ORS and SRS had no impact on either patient activation or mental health symptoms among people treated in an outpatient setting.  Importantly, the clinicians in the study were trained by someone with an allegiance to the use of the scales as routine outcome measures.
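
For readers who want a concrete sense of what d = 0.32 versus d = 0.69 means, here is a minimal sketch of how Cohen’s d (the standardized mean difference) is computed from two group summaries.  The group statistics in the example are invented purely for illustration and are not taken from the trials cited above.

```python
import math

def cohens_d(mean_tx: float, mean_ctrl: float,
             sd_tx: float, sd_ctrl: float,
             n_tx: int, n_ctrl: int) -> float:
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_tx - 1) * sd_tx**2 + (n_ctrl - 1) * sd_ctrl**2)
                          / (n_tx + n_ctrl - 2))
    return (mean_tx - mean_ctrl) / pooled_sd

# Hypothetical end-of-treatment scores (higher = better), illustration only.
print(cohens_d(mean_tx=26.0, mean_ctrl=23.8, sd_tx=7.0, sd_ctrl=7.0,
               n_tx=100, n_ctrl=100))  # ~0.31, conventionally a "small" effect
```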

Fortunately, a large and growing body of literature points in a more productive direction.  Consider the recent study by De Jong, van Sluis, Nugter, Heiser, and Spinhoven (2012), which found that a variety of therapist factors moderated the effect ROM had on outcome. Said another way, in order to realize the potential of feedback for improving the quality and outcome of psychotherapy, emphasis must shift away from measurement and monitoring and toward the development of more effective therapists.

What’s the best way to enhance the effectiveness of therapists?  Studies on expertise and expert performance document a single, underlying trait shared by top performers across a variety of endeavors: deep domain-specific knowledge.  In short, the best know more, see more and, accordingly, are able to do more.  The same research identifies a universal set of processes that both account for how domain-specific knowledge is acquired and furnish step-by-step directions anyone can follow to improve their performance within a particular discipline.  Miller, Hubble, Chow, & Seidel (2013) identified and provided detailed descriptions of three essential activities giving rise to superior performance.  These include: (1) determining a baseline level of effectiveness; (2) obtaining systematic, ongoing feedback; and (3) engaging in deliberate practice.

I discussed these three steps and more, in a recent interview for the IMAGO Relationships Think Tank. Although intended for their members, the organizers graciously agreed to allow me to make the interview available here on my blog. Be sure and leave a comment after you’ve had a chance to listen!

Filed Under: Behavioral Health, evidence-based practice, Feedback, Feedback Informed Treatment - FIT, FIT

What do clinicians want anyway?

January 26, 2015 By scottdm 3 Comments

What topics are practitioners interested in learning about?

If you read a research journal, attend a continuing education event, or examine the syllabus from any graduate school course, you’re likely to conclude: (1) diagnosis; (2) treatment methods; and perhaps (3) the brain.  As I’ve written about previously, the brain is currently a hot topic in our field.

Ask clinicians, however, and you hear something entirely different.  That’s exactly what Giorgio Tasca and colleagues did, publishing their results in a recent article in the journal, Psychotherapy.  Here’s what they found.

Regardless of age or theoretical orientation, the top three topics of interest among practicing clinicians were: (1) the therapeutic relationship; (2) therapist factors; and (3) professional development.

Let’s consider each one in turn.

Number one: the therapeutic relationship.  Honestly, when was the last time you attended a workshop focused solely on improving your ability to connect with, engage, understand, and relate to your clients?  The near complete absence of such offerings is curious, isn’t it?  Especially when you consider that the quality of the therapeutic bond is the single best predictor of treatment outcome, the most evidence-based principle in the literature!

Paradoxically, research shows that therapists who are able to solicit negative feedback about the alliance early in the treatment process have better outcomes in the end.  Turns out, soliciting such feedback and using it to strengthen the working relationship is a skill few clinicians, despite their beliefs to the contrary, actually possess.

There’s a simple solution: download and begin using the Session Rating Scale (SRS), a simple, four-item alliance measure designed to be administered at the end of each session.  Multiple randomized clinical trials now show that formally seeking client feedback not only improves outcome but also decreases both dropout and deterioration rates.
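
To make the mechanics concrete, here is a minimal sketch of how an SRS administration might be scored.  It assumes each of the four items is a mark on a 0-10 visual analogue line summed to a 0-40 total, and it uses 36 as the threshold for inviting a conversation about the alliance; treat both the item format and the cutoff as assumptions for illustration rather than an official scoring specification.

```python
from typing import List

ALLIANCE_CUTOFF = 36.0  # assumed threshold for opening a conversation about the alliance

def score_srs(item_marks_cm: List[float]) -> float:
    """Sum four 0-10 cm marks (relationship, goals/topics, approach, overall)."""
    if len(item_marks_cm) != 4:
        raise ValueError("The SRS has exactly four items.")
    return sum(item_marks_cm)

total = score_srs([8.2, 9.0, 7.5, 8.8])
if total < ALLIANCE_CUTOFF:
    print(f"SRS total {total:.1f}: invite the client to say more about what was missing.")
else:
    print(f"SRS total {total:.1f}: alliance looks solid -- still worth checking in.")
```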

Number two: therapist factors.  In other words, you!

Some time ago, veteran psychotherapy researcher Sol Garfield–one of the editors of the prestigious Handbook of Psychotherapy and Behavior Change–called the therapist the “neglected variable” in psychotherapy research.  Available evidence documents that the clinician doing the therapy contributes 5 to 9 times more to outcome than the method used.

Which brings us to topic number three: professional development.

Large, multinational studies document the central importance that professional development plays in the identity and satisfaction of clinicians.  And yet, as I wrote not long ago, “the near ubiquitous mandate that clinicians attend so many hours per year of approved ‘CE’ events in order to further their knowledge and skill base has no empirical support.”  So, what does work?  Recent research by Singapore-based psychologist Daryl Chow shows that the best invest 4.5 more hours outside of work than their average counterparts in activities specifically aimed at improving their performance–a process known as deliberate practice.

Filed Under: Behavioral Health, Conferences and Training, FIT

Therapist Wanted: Dead or Alive

January 15, 2015 By scottdm 1 Comment

Do you get those letters about the top healthcare providers in your area?

At the beginning of the new year, our city’s local magazine publishes a list of the top healthcare providers.  It’s a big deal.  Organized by location and specialty, the issue contains full-page photos, glossy spreads, and breezy write-ups.  Impressive stuff with a wide and hungry readership anxious to sort the best from the rest.

So, how do the publishers separate the proverbial “wheat from the chaff?”  The answer, depending on whether you are a provider or potential patient, may alternately surprise or frighten you.

Not long ago, Abigail Zuger received one of those letters.  In it, she learned that a relative of hers had been named “one of the world’s top physicians in his area of expertise.”  Ordinarily, she would have been proud.  There was only one problem.  Her now esteemed relative was dead–and not just recently.  He’d been dead 16 years!

Abigail Zuger is a physician and professor of medicine at Columbia University.  The story about her experience appeared in the New York Times.  In it, she notes the temptation to become cynical, to dismiss the Top Doc lists, “as just so much advertisement and avarice.”  She concludes, however, that a “more nuanced and charitable view is…[that] these services may simply be trying, valiantly if not clumsily, to remedy the single biggest mystery in all of health care…what makes a top doctor…[and] how to find one.”

Three methods dominate among list makers: (1) culling names and addresses from phone directories; (2) polling healthcare providers; and (3) collating patient online ratings.  Said another way, consulting available lists lets you know if your healthcare provider once had a phone, was liked by their colleagues, or managed not to piss off too many of the people they treated!

Remarkably absent from the criteria used to identify top providers is any valid and reliable measure of their effectiveness!

Determining one’s effectiveness as a mental health professional is not as difficult or time consuming as it once was.  Whether you work with individuals, groups, or families, in an inpatient, residential, or outpatient setting, a simple set of tools is available for monitoring both the outcome and the quality of the services you provide.  The tools take minutes to administer and score, and they are free.

If you are worried about statistics, don’t be.  A variety of electronic solutions exist which not only administer and score the measures but also provide normative comparisons for assessing individual client progress, as well as sophisticated analyses of provider, program, and agency effectiveness.

To see what’s possible, check out the Colorado Center for Clinical Excellence.  There, clinicians not only measure their effectiveness, but set benchmarks for superior performance and report clinician outcomes transparently on the agency website.

Filed Under: Conferences and Training, Feedback Informed Treatment - FIT, FIT, FIT Software Tools, Top Performance

Feedback Informed Treatment: Update

August 16, 2012 By scottdm Leave a Comment

Chicago, IL (USA)

The last two weeks have been a whirlwind of activity here in Chicago.  First, the “Advanced Intensive.”  Next came the annual “Training of Trainers.”  Each week, the room was filled to capacity with practitioners, researchers, supervisors, and agency directors from around the globe receiving in-depth training in feedback-informed practice.  It was a phenomenal experience.  As the video below shows, we worked and played hard!

Already, people are signing up for the next “Advanced Intensive” scheduled for the third week of March 2013 and the new three-day intensive training on FIT supervision scheduled for the 6-9th of August 2013.  Both events build on and are designed to complement the newly released ICCE FIT Treatment and Training Manuals.  In fact, all participants receive copies of the six manuals, covering every detail of FIT practice, from the empirical evidence to implementation.  The manuals were developed to support ICCE’s submission of FIT to the National Registry of Evidence-Based Programs and Practices (NREPP).  As I blogged about last March, ICCE trainings fill up early.  Register today and get the early bird discount.

Filed Under: CDOI, Conferences and Training, evidence-based practice, Feedback Informed Treatment - FIT, FIT Tagged With: cdoi, icce

Yes, More Evidence: Spanish version of the ORS Validated by Chilean Researchers

June 16, 2011 By scottdm Leave a Comment

Last week, Chile.  This week, Perth, Australia.  Yesterday, I landed in Sydney following a 30-hour flight from the United States.  I managed to catch the last flight out to Perth before all air travel was grounded due to another ash cloud–this time coming from Chile!  I say “another” as just over a year ago, I was trapped behind the cloud of ash from the Icelandic eruption!  So far so good.  Today, I’ll spend the day talking about “excellence” in behavioral healthcare.

Before heading out to teach for the day, I wanted to upload a report from a recent research project conducted in Chile investigating the statistical properties of the ORS.  I’ve attached the report here so you can read for yourself.  That said, let me present the highlights:

  • The Spanish version of the ORS is reliable (alpha coefficients of .90-.95).
  • The Spanish version of the ORS shows good construct and convergent validity (correlations with the OQ-45 of .50 and .58).
  • The Spanish version of the ORS is sensitive to change in a treated population.

The authors of the report, which was presented at the Society for Psychotherapy Research meeting, conclude, “The ORS is a valid instrument to be used with the Chilean population.”
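
For those curious how a reliability coefficient like the alphas reported above is calculated, here is a minimal sketch of Cronbach’s alpha for a four-item measure such as the ORS.  The response data are fabricated solely to illustrate the formula and have nothing to do with the Chilean sample.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    n_items = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (n_items / (n_items - 1)) * (1 - item_variances / total_variance)

# Fabricated 0-10 item scores for six respondents, illustration only.
scores = np.array([
    [7, 6, 7, 8],
    [3, 2, 4, 3],
    [9, 8, 9, 9],
    [5, 5, 6, 5],
    [2, 3, 2, 2],
    [8, 7, 8, 9],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```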

As I asked in my blog post last week, “how much more evidence is needed?”  Now, more than ever, clinicians need simple, valid, reliable, and feasible tools for evaluating the process and outcome of behavioral healthcare.  The ORS and SRS FIT the bill!

Filed Under: FIT, PCOMS, Practice Based Evidence Tagged With: behavioral health, cdoi, Chile, evidence based practice, mental health, ors, outcome rating scale, session rating scale, srs

The "F" Word in Behavioral Health

April 20, 2011 By scottdm Leave a Comment

Since the 1960s, over 10,000 how-to books on psychotherapy/counseling have been published—everything from nude marathon group therapy to the most recent “energy-based treatments.”  Clinicians have at their disposal literally hundreds of methods to apply to an ever-growing list of diagnoses as codified in the Diagnostic and Statistical Manual of Mental Disorders (soon available in its 5th and expanded edition).

Conspicuously absent from the psychological cornucopia of diagnoses and treatments is the “F” word: FAILURE. A quick search of Amazon.com led to 32,670 results for the term, “psychotherapy,” 1,393 hits for “psychotherapy and depression,” and a mere 85 citations for “psychotherapy and failure.” Of the latter 85, less than 20 dealt with the topic of failure directly. There are some notable exceptions. The work of psychologist Jeffrey Kottler, for example. The dearth of information and frank discussion points to a glaring fact: behavioral health has a problem with failure.
The research literature is clear on the subject: we fail.  Dropout rates have remained embarrassingly high over the last two and a half decades—hovering around 47%.  At the same time, 10% of those who stay in services deteriorate while in care.  Also troubling, despite the expansion of treatment modalities and diagnoses, the outcome of treatment (while generally good) has not improved appreciably over the last 30 or so years.  Finally, as reviewed recently on this blog, available evidence indicates that clinicians, despite what many believe, do not improve with experience.

In short, behavioral health is failing when it comes to failure.  As a group, we rarely address the topic.  Even when we do address it directly, we find it hard to learn from our mistakes.

Our study of top performing clinicians and agencies documents that the best have an entirely different attitude toward failure than the rest.  They work at failing: every day, quickly, and in small ways.  In the lead article of the upcoming Psychotherapy Networker, “The Path to Mastery,” we review our findings and provide step-by-step, evidence-based directions for using failure to improve the quality and outcome of behavioral health.  As we say in the article, “more than a dozen clinical trials, involving thousands of clients and numerous therapists, have established that excellence isn’t reserved for a select few. Far from it: it’s within the reach of all.”  Getting there, however, requires that we embrace failure like never before.

At this year’s “Training of Trainers” (TOT) conference, building “mindful infrastructures” capable of identifying and using failure at the individual practitioner, supervisor, and agency level will be front and center.  Please note: this is not an “advanced workshop” on client-directed, feedback-informed clinical work (CDOI/FIT).  There will be no lectures or PowerPoint presentations.  Participants get hands-on experience learning to provide training, consultation, and supervision to therapists, agencies, and healthcare systems.

But don’t take our word for it.  Listen to what attendees from the 2010 TOT said.  Be sure and register soon as space is limited.

Filed Under: Behavioral Health, evidence-based practice, excellence, FIT Tagged With: behavioral health, brief therapy, Failure, holland, Jeffrey Kottler, meta-analysis, psychotherapy networker

Ohio Update: Use of CDOI improves outcome, retention, and decreases "board-level" complaints

August 5, 2010 By scottdm Leave a Comment

A few days ago, I received an email from Shirley Galdys, the Associate Director of the Crawford-Marion Alcohol and Drug/Mental Health Services Board in Marion, Ohio.  Back in January, I blogged about the steps the group had taken to deal with the cutbacks, shortfalls, and all around tough economic circumstances facing agencies in Ohio.  At that time, I noted that the dedicated administrators and clinicians had improved the effectiveness and efficiency of treatment so much by their systematic use of Feedback-Informed Treatment (FIT) that they were able to absorb cuts in funding and loss of staff without having to cut services to their consumers.

Anyway, Shirley was writing because she wanted to share some additional good news.  She’d just seen an advance copy of the group’s annual report.  “Since we began using FIT over two years ago,” she wrote, “board level complaints and grievances have decreased!”

In the past, the majority of such complaints have centered on client rights.  “Because of FIT,” she continued, “we are making more of an effort to explain to people what we can and cannot do for them as part of the ‘culture of feedback’….we took a lot for granted about what people understood about behavioral health care prior to FIT.”

The Crawford-Marion Alcohol and Drug/Mental Health Services Board is now into the second full year of implementation.  They are not merely surviving, they are thriving!  In the video below, directors Shirley Galdys, Bob Moneysmith, and Elaine Ring talk about the steps for a successful implementation.

Filed Under: Behavioral Health, Feedback Informed Treatment - FIT, FIT, Implementation Tagged With: addiction, behavioral health, cdoi, mental health, shirley galdys
