The Efficacy of Rational Emotive Therapy: A Quantitative Review of the Outcome Research
Larry C. Lyons
Paul J. Woods, Hollins College, Virginia
From: Lyons, Larry C., & Woods, Paul J. (1991). The efficacy of rational emotive therapy: A quantitative review of the outcome research. Clinical Psychology Review, 11, 357-369.
We thank Wendy A. Morris for her assistance in preparing this manuscript. Without her inestimable help, this project would not have come to fruition.
Larry C. Lyons, MA, was a Research Associate at the Hollins Communications Research Institute when this article was published in Clinical Psychology Review. He is currently a consultant with a private firm in Manassas, Virginia.
Paul J. Woods, Ph.D., is a Professor of Psychology (Emeritus) in the Department of Psychology at Hollins College, and director of the Institute for Rational Therapy and Behavioral Medicine in Roanoke, Virginia.
Address requests for reprints of this article, the coding manual, or the list of studies used in this quantitative review to the second author at:
Paul J. Woods, Ph.D., Institute for Rational Therapy and Behavioral Medicine, 5541 Florist Road, Roanoke, Virginia
Abstract
The results from a meta-analysis of 70 Rational Emotive Therapy (RET) outcome studies are reported. A total of 236 comparisons of RET to baseline, control groups, Cognitive Behavior Modification, Behavior Therapy, or other psychotherapies were examined. The results indicated that subjects receiving RET demonstrated significant improvement over baseline measures and control groups. No significant differences in effect size were found between studies that used psychotherapy clients and those that used students as subjects. Effect size was significantly related to therapist experience and to the duration of therapy. Comparisons rated high in internal validity had significantly higher effect sizes than medium-validity studies, and outcome measures rated as low in reactivity had significantly higher effect sizes than more reactive measures.
Contrary to other reviews using the narrative review method, RET was found to be an effective form of therapy. However, this conclusion was tempered by methodological flaws in the studies reviewed, such as the lack of follow-up data and of information regarding attrition rates.
Rational Emotive Therapy (RET) has become one of the most widely accepted forms of Cognitive Behavior Modification since its initial development in the late 1950s and early 1960s (Ellis, 1957, 1962; Gregg, 1973). Narrative reviews have either supported RET's efficacy or criticized its usefulness. Using a quantitative review method (meta-analysis), this study examines the efficacy of RET and addresses many of the criticisms advanced in previous reviews of RET.
Ledwidge (1978) reviewed 13 outcome studies comparing Cognitive Behavior Modification (CBM) to Behavior Therapy. Of the 13 studies, six were based on either RET or Systematic Rational Restructuring. He concluded that although these studies suggested no differences between behavior therapy and cognitive behavior therapy (and, by extension, RET), this apparent equivalence held only because none of the studies used a clinical sample. On the basis of these findings, Ledwidge concluded that behavior therapy was the superior form of treatment.
DiGiuseppe and Miller (1977) reviewed 22 studies which examined the effectiveness of RET or a closely related therapy. These studies were categorized in two sets: comparative studies, which compared RET to some other psychotherapy; and non-comparative research, which compared RET to baseline scores or against some form of control. DiGiuseppe and Miller concluded that many of the studies had a variety of methodological problems typical of most psychotherapy research. They found RET to be considerably more effective than no-treatment controls and baseline scores.
Zettle and Hayes (1980) reviewed 35 studies which investigated the theoretical assumptions, individual treatment components, and overall effectiveness of RET. They claimed all available outcome research was either unsystematic case studies or analogue research. Fewer than half (16) of the 35 studies were outcome experiments. Based on these 16 studies, the authors concluded that the clinical efficacy of RET had not been demonstrated. Prochaska (1984) surveyed eight studies which examined the effectiveness of RET. From this limited sample, he concluded that these studies demonstrated only equivocal results and that the most positive application of RET was for reducing common anxiety. Based on their review of 47 RET outcome studies, McGovern and Silverman (1984) concluded that these studies supported the efficacy of RET.
Reviewers of the RET outcome literature have raised many criticisms of RET outcome research. One such criticism involves the issue of "elegant" vs. "inelegant" RET (Ellis, 1980). Ellis describes "inelegant" RET as other forms of cognitive behavior modification, while "elegant" RET refers to the specific approach advocated by Ellis and others. Similarly, Wessler (1983) claimed that researchers frequently misunderstand the therapeutic approaches of RET, or else mislabel or misrepresent the procedure until it is no longer RET. Thus, the question to be investigated is: Does the degree of similarity of the treatment to RET influence effect size?
Several reviewers have criticized the methodological quality of RET outcome studies (e.g., DiGiuseppe and Miller, 1977; Ledwidge, 1978; Prochaska, 1984; Zettle and Hayes, 1980). Their concerns include the low percentage of male subjects; subject solicitation; students vs. psychotherapy clients; the number of subjects per comparison; subject and therapist assignment to treatment or control groups; therapist training; treatment duration; individual vs. group therapy; and the reactivity and type of outcome measure.
Another area of investigation is the issue of using students versus clinical subjects. Ledwidge (1978) and Zettle and Hayes (1980) note in their reviews that most RET studies used student volunteers as subjects. They question the external validity of these studies, since the problems assessed may not be applicable to the typical client. While RET may indeed prove effective with students suffering from test anxiety, it may be relatively ineffective when dealing with more serious clinical problems, such as depression or agoraphobia. In other words, are there differences in effect size between studies that use students and those that use psychotherapy clients?
Reviewers disagree on the therapeutic effectiveness of RET. This disagreement constitutes the core of the present review: What is the therapeutic efficacy of RET? The question is difficult to answer in the context of an individual study or a narrative review, because reviewers can reach very different conclusions given the same evidence. The quantitative review format directly addresses the question of the therapeutic efficacy of RET without reviewer bias.
Method
Selection of Studies
The Psychological Abstracts and Dissertation Abstracts International from 1972 to 1988 were searched for relevant studies. In addition, the reference lists of the obtained studies and the previously mentioned reviews were scanned for additional material. A list of the studies used in this meta-analysis can be seen in the appendix following this article.
Each study had to meet the following inclusion criteria:
(1) At least one treatment group received Rational Emotive Therapy, or a treatment which used elements of RET.
(2) The study compared RET to a baseline measure, a control group, or other type of therapy.
(3) The study used a quantitative statistic which could be converted to an effect size estimate.
(4) The study gave the number of subjects in each treatment or control group.
Seventy studies met these criteria, yielding 236 comparisons between RET and a baseline measure, a control group, or another form of therapy.
Studies were rejected because of uninterpretable statistics, or because insufficient information regarding treatments or experimental procedures was presented. Single-subject designs and case studies were also excluded.
Effect Size Estimation
Each comparison of RET to the baseline assessment, control, or treatment group was expressed in terms of the standardized difference between mean scores (d; Cohen, 1977). Some researchers (e.g., Glass, 1976, 1977; Smith & Glass, 1977; Smith et al., 1980) advocate using the comparison group standard deviation as the denominator for d. However, there are two reasons against using the comparison group standard deviation: first, the within-subjects standard deviation has approximately half the sampling error of the comparison group standard deviation; second, the within-subjects standard deviation generally provides a more accurate estimate of the population standard deviation (Hunter et al., 1982). After d is calculated for each study, the effect sizes are averaged.
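As an illustration (not part of the original article), the two denominator choices discussed above can be sketched in Python. The function names and example values are ours, and the pooled within-group standard deviation stands in here for the within-subjects estimate recommended by Hunter et al.:

```python
import math

def d_pooled(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference d using the pooled within-group SD."""
    pooled_var = ((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2) / (n_t + n_c - 2)
    return (mean_t - mean_c) / math.sqrt(pooled_var)

def d_comparison_group(mean_t, mean_c, sd_c):
    """Glass-style estimate: comparison (control) group SD as the denominator."""
    return (mean_t - mean_c) / sd_c

# When the two group SDs are equal, the estimates coincide:
print(d_pooled(10.0, 2.0, 20, 8.0, 2.0, 20))   # 1.0
print(d_comparison_group(10.0, 8.0, 2.0))      # 1.0
```

The two estimators diverge only when the treatment and comparison group standard deviations differ, which is exactly the case in which the choice of denominator matters.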
Where possible, effect sizes were obtained by directly calculating d from the means and standard deviations reported in the individual study. Otherwise, d was calculated from t, F, r, or a probability value using procedures taken from Cohen (1977) or Hunter et al. (1982). If the comparison was taken from a two-way ANOVA, the statistic was first converted to the eta statistic using an algorithm taken from Hasse (1983). This correlation coefficient was then converted to d using procedures outlined by Cohen (1977) and Hunter et al. (1982).
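The standard conversions from test statistics to d can be sketched as follows. This is our Python rendering of the textbook formulas in Cohen (1977) and Hunter et al. (1982), not the authors' original procedure:

```python
import math

def d_from_t(t, df_error):
    """d from an independent-samples t and its error degrees of freedom."""
    return 2.0 * t / math.sqrt(df_error)

def d_from_r(r):
    """d from a (point-biserial) correlation coefficient."""
    return 2.0 * r / math.sqrt(1.0 - r ** 2)

def d_from_F(F, df_error):
    """d from a one-df F ratio, via t = sqrt(F)."""
    return d_from_t(math.sqrt(F), df_error)
```

For example, t = 2.0 with 16 error degrees of freedom gives d = 1.0, and the same comparison reported as F = 4.0 yields the identical estimate, since t = sqrt(F) for a one-df effect.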
When the exact probability level was given, this value was converted into a z score and then converted to a point-biserial correlation. A d was then estimated using the previously mentioned procedures outlined by Cohen (1977) and Hunter et al. (1982). Unfortunately, in some studies an exact statistic was not available, and an approximation procedure was used to estimate d. Where nonparametric statistics or multiple comparison procedures (e.g., Duncan's multiple range test or the Newman-Keuls test) were used, the associated probability value (.05, .01, or .001) was converted first to a z score and then to a d statistic using the previously discussed procedures.
If the study stated only that a significant difference was found, the associated probability value was set to .05, and the appropriate d statistic was calculated using the above conversion algorithms. Similarly, where "no significant difference" was reported, the effect size was assumed to be 0.
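The approximation path described above (reported p, converted to z, then to a point-biserial r, then to d, with a bare "significant difference" read as p = .05 and "no significant difference" read as d = 0) might be sketched like this. The r = z / sqrt(N) step follows the Hunter et al. approximation, and all names and the one-tailed reading of p are our assumptions:

```python
import math
from statistics import NormalDist

def d_from_p(p, n_total):
    """Approximate d from a reported (one-tailed) probability level."""
    z = NormalDist().inv_cdf(1.0 - p)   # z score for the reported tail area
    r = z / math.sqrt(n_total)          # approximate point-biserial correlation
    return 2.0 * r / math.sqrt(1.0 - r ** 2)

def d_from_verbal_report(report, n_total):
    """Handle studies that report only a verbal outcome."""
    if report == "no significant difference":
        return 0.0
    # "significant difference" with no exact statistic: assume p = .05
    return d_from_p(0.05, n_total)
```

With N = 40, for instance, a bare report of significance at the .05 level translates into a d of roughly 0.54, which shows how conservative this fallback is relative to computing d from an exact test statistic.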
In comparisons of d's derived from test statistics (e.g., t, F, or means and standard deviations) to those derived from probability values taken from the same statistics, the effect size estimates derived from the test statistics were less conservative than those derived from p.
The majority of the studies included in this meta-analysis used more than one outcome measure. Other meta-analyses (e.g., Smith et al., 1980) included all nonredundant measures in the analysis. The main problem with this approach is that the analysis is then based on dependent effect sizes, biasing the results by giving more weight to studies with multiple effect sizes. To avoid this bias, the present study averaged the effect sizes within each comparison of RET to produce a single statistic, preserving the independence of each comparison.
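The within-comparison averaging can be sketched as follows; the study labels and effect-size values here are invented for illustration:

```python
from statistics import mean

# Hypothetical per-measure effect sizes within each RET comparison.
comparisons = {
    "Study A: RET vs. control": [0.50, 0.25, 0.75],
    "Study B: RET vs. baseline": [0.90],
}

# Average within each comparison so that every comparison contributes
# exactly one independent d to the meta-analysis, regardless of how
# many outcome measures the original study reported.
per_comparison_d = {label: mean(ds) for label, ds in comparisons.items()}
print(per_comparison_d["Study A: RET vs. control"])  # 0.5
```

A study reporting three outcome measures thus carries the same weight as one reporting a single measure, which is the independence property the text describes.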
Coding Procedures
After converting the study statistics to effect sizes, study characteristics were examined and coded using a 28-variable coding scheme. A complete list of the articles and dissertations used in the present meta-analysis is available from the author for a $5 fee to cover printing, postage, and handling. These coding variables included the year of the comparison (either publication or acceptance date) and the level of therapist training.