In Australia, evidence-based therapies such as cognitive-behavioral therapy (CBT) are delivered by psychologists registered with Medicare under the Better Access to Mental Health program. CBT is the most widely researched form of psychotherapy, and it is the basis on which the Better Access program is funded.
Consumers complain that when they are depressed or anxious, the last thing they feel like doing is rating their mood on a scale of one to 10 every half hour and predicting how they will feel in the next half hour. This kind of written homework is so overwhelming and nonsensical to them that they become even more depressed. Some will actively seek out therapists who don't use CBT because the approach doesn't work for them.
Some practitioners, on the other hand, seek to get around this requirement by defining everything they do in CBT terms. This method ensures they are on the right side of the authorities, even if it’s at the expense of a client’s well-being.
It’s not my intention here to argue about the relative merits of CBT or question the practices of other therapists. However, I do question the status quo of our current approach to funding and urge a more realistic approach to mental health care in the interests of all who seek relief from their emotional pain.
Until 1999, the scientist-practitioner model was upheld as the ideal to which professionals should aspire (Henshaw, 2000). It holds that psychologists should discerningly consume new research findings, put them into practice in applied settings, evaluate their own interventions using empirical methods and then disseminate their results to the wider scientific community as published articles (Barlow, Hayes & Nelson, 1984).
However, in practice, this hardly ever happens. Few clinicians use research as their primary source of information when treating clients. They complain that studies are too concerned with methodological issues such as statistical sampling and reliability, which severely limits their applicability to individual clients (Barlow et al., 1984; Cohen, Sargent & Sechrest, 1986; Polkinghorne, 1991).
The evident failure of the scientist-practitioner model has led some to question why psychologists should encourage a dishonest adherence to this ideal (e.g., John, 1998). Until fairly recently, however, no practical alternative had been suggested. Flawed though it is, the scientist-practitioner model has given psychology more standing than no model at all when offering treatment services in an environment of economic rationalism, evidence-based interventions and competition from other health professionals and paraprofessionals. Adherence to it persuades policy-making bodies such as governments and insurers to continue their support of psychology (Cotton, 1998).
Although objective decision-making is greatly emphasized in postgraduate education, Cohen et al. (1986) found that in clinical practice, it played a minor part in the selection of treatments. Even when psychologists attempted to practice therapy according to the scientist-practitioner ideal, there was still much room for subjectivity.
It has been estimated that as much as 90 percent of the variance observed in natural settings goes unaccounted for in the professional literature. Accordingly, practitioners who share the same theoretical framework can make quite different treatment decisions, while those from different therapeutic traditions often make similar treatment decisions even though their post-hoc rationalizations of these decisions vary greatly (John, 1998).
An insight into the way psychologists use the psychotherapy literature in clinical decision-making processes was provided by Cohen et al. (1986). They argued that an important aspect of the problem not addressed by Barlow et al. (1984) was the definition of “research use.” Therefore, two types of usage were proposed: instrumental and conceptual.
Instrumental usage occurs on several levels. At one level, a clinician may be aware of empirical studies but not use the data in making decisions. At another, decisions might be considered on the basis of published data but rejected because of impeding factors such as cost, inconvenience or perceived irrelevance to a particular client. Finally, research might be implemented when it is deemed useful to do so.
Instrumental implementation has been viewed as the most appropriate definition of research use, but it rarely occurs in clinical practice (Barlow et al., 1984). Conceptual usage, by contrast, refers to the gradual and implicit influence the literature has on awareness of clinical issues, problem conceptualization and decision-making, and it is quite widespread (Cohen et al., 1986).
Since the 1999 publication of Hubble, Duncan and Miller's groundbreaking book, "The Heart and Soul of Change," and its updated second edition in 2010, we have had an alternative explanation of how therapy works, and it has very little to do with the type of model used (yes, even CBT). Drawing on hundreds of studies published over several decades and convincing data from a number of meta-analyses, the authors concluded that therapy works and proposed a four-factor model to account for the observed change. The four factors are:
- A good relationship with your therapist (30 percent of the observed change)
- Strengths and resources that, together with your therapist, you will uncover, use, enhance and add to (40 percent of the observed change)
- Placebo effect (15 percent of the observed change)
- Specific model (15 percent of the observed change)
The authors call the equivalence of effectiveness among psychotherapies the "Dodo bird verdict," after the Dodo in Lewis Carroll's "Alice in Wonderland," who declares, "Everybody has won, and all must have prizes." They argue that when the first three factors are strongly met, even something such as religious healing can be as effective as any psychological therapy model.
They propose that, rather than the outdated, evidence-based scientist-practitioner model, clinicians should use an outcomes-based model. An outcomes-based approach tracks the four factors very carefully, and it far outstrips the earlier model at capturing important information, such as the early identification of clients who are likely to drop out and the detection of clients who are not benefiting from treatment or who are deteriorating (Frost, 2015).
It measures the effectiveness of each session using the Session Rating Scale, which gauges the strength of the client-therapist relationship and whether the client feels heard, understood and respected. It also assesses whether the session addressed goals and topics that are important to the client, whether the therapist's model is a good fit, and the session's overall effectiveness.
Between sessions, the Outcome Rating Scale assesses how the client has progressed over the week in terms of personal well-being, interpersonal relationships, social functioning and overall functioning. These measures are tracked over the course of therapy so that the therapist can adjust his or her approach to suit the client and improve overall therapeutic effectiveness.
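As a rough sketch of how such tracking might work in practice, the snippet below flags clients who are deteriorating or not benefiting, based on per-session Outcome Rating Scale totals. The 0-to-40 scale, the five-point reliable-change threshold and the `flag_progress` function are illustrative assumptions for this example, not part of the published scales or of any specific software.

```python
# Illustrative sketch: tracking Outcome Rating Scale (ORS) totals across
# sessions. The 0-40 scale and the 5-point "reliable change" threshold
# are assumptions chosen for illustration only.

def flag_progress(scores, reliable_change=5, window=3):
    """Return a status string given a list of per-session ORS totals,
    earliest session first."""
    if len(scores) < 2:
        return "insufficient data"
    change = scores[-1] - scores[0]
    # Scores have dropped by a reliable margin: review the approach.
    if change <= -reliable_change:
        return "deteriorating"
    # Flat recent scores and no reliable overall gain: consider adjusting.
    recent = scores[-window:]
    if (len(scores) >= window
            and max(recent) - recent[0] < reliable_change
            and change < reliable_change):
        return "not benefiting"
    if change >= reliable_change:
        return "improving"
    return "early / unclear"

print(flag_progress([18, 17, 12]))      # prints "deteriorating"
print(flag_progress([15, 16, 22, 25]))  # prints "improving"
```

In a real service the thresholds would come from the scale's published norms rather than fixed constants; the point is only that session-by-session measurement makes these warning signs visible early, while there is still time to change course.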
I, for one, long for the day when consumers of mental health services are heard and respected and their needs are met. The costs of treating mental health problems are far outweighed by the benefits to the economy in terms of increased productivity and a happier, healthier workforce.
I also long for a more common-sense approach to mental health care, one free of the authorities' attitude that psychologists are a bunch of cowboys who need to be reined in and made to justify everything they do according to an outdated but favored evidence-based, scientist-practitioner model.
Let's put the needs of our clients first and focus on working with them toward enhancing their emotional well-being. Let's leave behind the divisive politicking and the funding based on rigid adherence to a convenient but out-of-touch approach.
For practical guides on how to work through common life problems (anxiety, depression, workplace bullying and relationships), you are welcome to receive free copies of my e-books by subscribing to my list at http://www.henshawconsulting.com.au.
Barlow, D. H., Hayes, S. C., & Nelson, R. O. (1984). The Scientist Practitioner. New York: Pergamon.
Cohen, L. H., Sargent, M. M., & Sechrest, L. B. (1986). Use of psychotherapy research by professional psychologists. American Psychologist, 41, 198–206.
Cotton, P. (1998). The framing of knowledge and practice in psychology: A reply to John. Australian Psychologist, 33, 31–37.
Duncan, B. L., Miller, S. D., Wampold, B. E., & Hubble, M. A. (2010). The Heart and Soul of Change: Delivering What Works in Therapy (2nd ed.). Washington, DC: American Psychological Association.
Frost, A. (2015). Better Access To Psychologists: Is It Value For Money?
Henshaw, S. (2000). Living With Unemployment: A Grounded Theory Study. Doctoral thesis, Murdoch University.
John, I. (1998). The scientist-practitioner model: A critical examination. Australian Psychologist, 33, 24–30.
Polkinghorne, D. E. (1991). Two conflicting calls for methodological reform. The Counseling Psychologist, 19, 103–114.