A couple of weeks ago, the American Psychological Association (APA) released clinical practice guidelines for the treatment of people diagnosed with post-traumatic stress disorder (PTSD). “Developed over four years using a rigorous process,” according to an article in the APA Monitor, these are the first in a series of recommendations of specific treatment methods for particular psychiatric diagnoses to be published by the organization.
Almost immediately, controversy broke out. On the Psychology Today blog, Clinical Associate Professor Jonathon Shedler advised practitioners and patients to ignore the new guidelines, labeling them “bad therapy.” Within a week, Professors Dean McKay and Scott Lilienfeld responded, lauding the guidelines as a “significant advance for psychotherapy practice,” while repeatedly accusing Shedler of committing logical fallacies and misrepresenting the evidence.
One thing I know for sure: coming in at just over 700 pages, few if any practitioners will ever read the complete guideline and supporting appendices. Beyond length, the way the information is presented–especially the lack of hypertext for cross-referencing the studies cited–seriously compromises any straightforward effort to review and verify evidentiary claims.
If, as the old saying goes, “the devil is in the details,” the level of mind-numbing minutiae contained in the official documents ensures he’ll remain well-hidden, tempting all but the most compulsive to accept the headlines on faith.
Consider the question of whether certain treatment approaches are more effective than others. Page 1 of the Executive Summary identifies differential efficacy as a “key question” to be addressed by the Guideline. Ultimately, four specific approaches are strongly recommended, being deemed more effective than…wait for it… “relaxation.”
My first thought is, “OK, curious comparison.” Nevertheless, I read on.
Only by digging deep into the report, tracing the claim to the specific citations, and then using PsychNET and another subscription service to access the actual studies, is it possible to discover that in the vast majority of published trials reviewed, the four “strongly recommended” approaches were actually compared to nothing. That’s right, nothing.
In the few studies that did include relaxation, the structure of that particular “treatment” precluded sufferers from talking directly about their traumatic experiences. At this point, my curiosity gave way to chagrin. Is it any wonder the four other approaches proved more helpful? What real-world practitioner would limit their work with someone suffering from PTSD to recording “a relaxation script” and telling their client to “listen to it for an hour each day”?
(By the way, it took me several hours to distill the information noted above from the official documentation–and I’m someone with a background in research, access to several online databases, a certain facility with search engines, and connections with a community of fellow researchers with whom I can consult.)
On the subject of what research shows works best in the treatment of PTSD, meta-analyses of studies in which two or more approaches intended to be therapeutic are directly compared consistently find no difference in outcome between methods–importantly, whether the treatments are designated “trauma-focused” or not. Meanwhile, another highly specialized type of research–known as dismantling studies–fails to provide any evidence for the belief that specialized treatments contain ingredients specifically remedial to the diagnosis! And yes, that includes the ingredient most believe essential to therapeutic success in the treatment of PTSD: exposure (1, 2).
So, if the data I cite above are accurate–and freely available–how could the committee that created the Guideline come to such dramatically different conclusions? In particular, how could it go to such great lengths to recommend particular approaches to the exclusion of others?
Be forewarned, you may find my next statement confusing. The summary of studies contained in the Guideline and supporting appendices is absolutely accurate. It is the interpretation of that body of research, however, that is in question.
More than anything else, the difference between the recommendations contained in the Guideline and the evidence I cite above is attributable to a deep and longstanding rift in the body politic of the APA. How else can one reconcile advocating the use of particular approaches with APA’s official policy on psychotherapy, which recognizes that “different forms . . . typically produce relatively similar outcomes”?
Seeking to place the profession “on a comparable plane” with medicine, some within the organization–in particular, the leaders and membership of Division 12 (Clinical Psychology)–have long sought to create a psychological formulary. In part, their argument goes, “Since medicine creates lists of recommended treatments and procedures, why not psychology?”
Here, the answer is simple and straightforward: because psychotherapy does not work like medicine. As Jerome Frank observed long before the weight of evidence supported his view, effective psychological care comprises:
- An emotionally-charged, confiding relationship with a helping person (e.g., a therapist);
- A healing context or setting (e.g., clinic);
- A rational, conceptual scheme, or myth that is congruent with the sufferer’s worldview and provides a plausible explanation for their difficulties (e.g., psychotherapy theories); and
- Rituals and/or procedures consistent with the explanation (e.g., techniques).
These four attributes not only fit the evidence but explain why virtually all psychological approaches tested over the last 40 years work–even those labeled pseudoscience (e.g., EMDR) by Lilienfeld and other advocates of guidelines composed of “approved therapies.”
That the profession could benefit from good guidelines goes without saying. Healing the division within the APA would be a good place to start. Until then, encouraging practitioners to follow the organization’s own definition of evidence-based practice would suffice. To wit, “Evidence based practice is the integration of the best available research with clinical expertise in the context of patient (sic) characteristics, culture, and preferences.” Note the absence of any mention of specific treatment approaches. Instead, consistent with Frank’s observations and the preponderance of research findings, emphasis is placed on fitting care to the person.
How to do this? The official statement continues, encouraging the “monitoring of patient (sic) progress . . . that may suggest the need to adjust the treatment.” Over the last decade, multiple systems have been developed for tracking engagement and progress in real time. Our own system, known as Feedback Informed Treatment (FIT), is being applied by thousands of therapists around the world, with literally millions of clients. It is listed on the National Registry of Evidence-based Programs and Practices. Moreover, when engagement and progress are tracked together with clients in real time, data to date document improvements in the retention and outcome of mental health services regardless of the treatment method being used.
Until next time,
Scott D. Miller, Ph.D.
Director, International Center for Clinical Excellence