Marcus Chatfield

Setting the Record Straight, Part 4

Updated: Aug 30, 2023

Editor’s note: Marcus Chatfield continues his series on Straight, Inc., the coercive treatment program for children and teens suspected of drug use that flourished with White House and NIDA support in the 1980s. In today’s entry, Marcus breaks down the flaws in the peer-reviewed research that helped cement this official legitimacy.

In “Outcome of a Unique Youth Drug Abuse Program: A Follow-up Study of Clients of Straight Inc.” (1989), Alfred S. Friedman, Richard Schwartz, and Arlene Utada claim that their report will include: “(1) the description of the study sample, (2) the outcome of the improvement that occurred between intake and follow-up, (3) the comparison of the outcome between graduates and ‘dropouts,’ and (4) the relationship of the amount of time in treatment to treatment outcome.”

"This report is in fulfillment of NIDA Professional Services Contract #64986."

In their sights.


However, the study is flawed in each of these areas: (a) the description of the study sample reveals major problems, such as selective sampling; (b) the intake-to-follow-up comparisons show only limited correlation, and the authors state that they are meant to measure the “outcome of the improvement” rather than actual outcome; (c) the promised comparison between graduates and dropouts is never actually discussed (a promised comparison between “respondents” and “nonrespondents” is likewise omitted); and (d) perhaps most importantly, and left unexplained, they found that “time in treatment” had no effect on drug use reductions.

For unexplained reasons, Schwartz, Straight’s medical director and the co-author credited with providing the data, recruited both dropouts and graduates for the same population sample, stating simply that “of the 222 in the follow-up study, 75 (34%) were ‘withdrawers’ or ‘drop-outs’ from the program, and 147 were graduates.” Also unexplained are his reasons for recruiting twice as many graduates as dropouts and for combining their data when measuring the effects of “treatment.” He states that some of the dropouts he selected spent less than one month in the program, while some graduates spent as long as 28 months there. This may help explain why the authors decided to avoid defining the very “treatment” they were studying.

There is no indication that Schwartz made any effort to recruit a random study sample. The authors do not describe the sample selection process other than stating that 222 participants were recruited and half of these were contacted more than once before agreeing to participate. There is no explanation given as to how these 222 were chosen out of the 330 he mentions as possible candidates. It is fair to assume that the willingness to participate, among these 222, reflects a population with more favorable views of the program, but this subject is not discussed by the authors. The study states that Schwartz personally knew many of the clients’ families, and because he does not describe his methods, the reader is left to wonder whether or not his personal knowledge of how participants might have answered had an influence on his selection process. There is no mention of whether or not he attempted to recruit those who were known to have negative sentiments towards the program. The authors state that this research was conducted in 1986, by which time there were certainly large numbers of disgruntled families and former clients.

An opaque selection process.


According to the program description in the article, the average time required for graduation was 10-14 months, yet the average time in treatment for his sample was 13.82 months. Considering that a third of his participants were dropouts, some of whom spent “under a month” in the program, which would pull the average down considerably, either his math is off or the program description is wrong (the latter is probably the case here). The authors explain that “most of this [program] description is derived from information provided by the administration of Straight, Inc.” Schwartz again fails to mention that he was part of this administration.
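To make the tension between these numbers concrete, here is a rough back-of-the-envelope sketch in Python. The counts and the overall mean come from the study as quoted above; the dropout averages I try are purely my own assumptions, since the study never reports that figure.

# Rough check of the reported averages (illustrative only).
# From the study: 222 clients total, 75 dropouts, 147 graduates, and an
# overall mean of 13.82 months in treatment, against a program description
# claiming graduation typically took 10-14 months.
# The dropout averages tried below are assumptions; the study never gives one.

n_total, n_dropouts, n_graduates = 222, 75, 147
overall_mean = 13.82  # months, as reported

for assumed_dropout_mean in (1, 3, 6, 9):  # hypothetical dropout averages, in months
    total_months = overall_mean * n_total
    graduate_mean = (total_months - assumed_dropout_mean * n_dropouts) / n_graduates
    print(f"If dropouts averaged {assumed_dropout_mean} months, "
          f"graduates must have averaged {graduate_mean:.1f} months")

Unless the dropouts stayed nearly as long as the graduates, the graduates’ average lands well above the 10-14 month window claimed in the program description, which is why at least one of the two figures has to be wrong.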

Within this non-random population sample, and among clients ranging from 13 to 21+ years old, the authors found that reported drug use at follow-up was lower than reported drug use before treatment, both in prevalence and in frequency. They then admit that the questions asked at follow-up were different from those asked at intake, so the comparison was flawed by design. As I mentioned in a previous entry, they state that at intake, clients were asked if they had “ever” used the drug, while at follow-up the time frame asked about was “in the last month.” For prevalence-of-use comparisons, a similar discrepancy skewed their findings. They acknowledge that this produces an invalid basis of comparison, yet they go on to cite their “Change Factor” repeatedly, as though it were relevant.
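To see why this framing all but guarantees an apparent “improvement,” here is a toy simulation with entirely hypothetical numbers. It models clients whose drug use never changes at all, then asks them the intake-style question (“ever used?”) and the follow-up-style question (“used in the last month?”).

import random

random.seed(0)

# Hypothetical clients, each with a fixed monthly probability of use that
# never changes -- i.e., no real improvement whatsoever.
monthly_use_prob = [random.uniform(0.05, 0.5) for _ in range(200)]

def used_in_window(p, months):
    """Simulate whether a client used at least once during `months` months."""
    return any(random.random() < p for _ in range(months))

# Intake-style question: "Have you ever used?" (here, any use in the prior 36 months)
intake = sum(used_in_window(p, 36) for p in monthly_use_prob) / len(monthly_use_prob)

# Follow-up-style question: "Have you used in the last month?"
followup = sum(used_in_window(p, 1) for p in monthly_use_prob) / len(monthly_use_prob)

print(f"Prevalence at intake ('ever used'):     {intake:.0%}")
print(f"Prevalence at follow-up ('last month'): {followup:.0%}")

Even with zero change in behavior, the “last month” question yields a far lower prevalence than the “ever used” question, so a before-and-after comparison built this way will report a dramatic reduction no matter what the program did.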

They give no explanation for the discrepancies inherent in their survey design, but one factor to consider is that the intake survey was not meant to assess an individual. Rather, it was meant to incriminate. I happen to know this from personal experience, but the authors also acknowledge it, saying, “During the intake interview (which occurred before this study was planned), when the client was questioned about whether he or she had used each substance, no precise time period or time frame was indicated by the examiner.”

Dariusz Kowalski, “Interrogation Room” (film/installation, 2009)


A few paragraphs on, they inexplicably assert that clients might have tended to over-report drug use at intake, but they do not give a reason to presume this. Among the former clients I’ve known, the vast majority of those who were dishonest at intake seem to have under-reported, in an attempt to be disqualified from treatment. The authors say, however, that “clients might have tended at the intake interview to report ‘peak’ or maximum frequencies of use of the various substances for the period before treatment.” There was no logical reason for any client to exaggerate drug use at intake, unless the survey questions themselves were geared towards eliciting exaggerated amounts. It’s not uncommon to hear former clients report being held in the intake room until they confessed to substantial drug use, and some report finally giving in to pressure to admit to things that were not true in order to be allowed out of the intake room.

Check all that apply.


While the authors admit that design discrepancies skewed their findings, they offer no explanation as to why the intake survey was constructed this way, and they fail to disclose that clients were not allowed to leave the intake room until they “got honest” about their drug use. They also fail to disclose that these intakes (and strip searches) were conducted by teenagers whose only training was their own experience of the intake procedure as a new client. In addition to these omissions, the authors mislead the reader by claiming that a comprehensive psychiatric exam was conducted at intake. In my research and conversations with former clients over the last 20 years, no one mentioned being given a psychiatric exam at intake.

The authors acknowledge that clients may have under-reported at follow-up and that there are legitimate concerns with the validity, objectivity, and usefulness of self-report evaluations, but they say this is a generally accepted practice and a fairly reliable method of evaluation. This may be true in general, but I would assert that self-reporting is an inappropriate method of collecting data among those who could be put in jeopardy by their participation. Self-reporting can only be considered an ethical (and reliable) practice when researchers are not placing their participants in danger by seeking admissions of drug use.

What do all these things have in common?


The authors acknowledge that there is no way of knowing how much of a client’s improvement was due to treatment and how much was due to other factors. There was no control group in their study, and the authors explain that part of the difficulty in arranging one was that it would have been unethical to deny a client “treatment.” But by the time of their study there would have been 183 “drop-outs” who had been removed from treatment by concerned parents and could possibly have served as a control group. As they say earlier in their review of the literature, “drop out, rather than completion or graduation, is the rule,” yet this possibility is never discussed. As I mentioned earlier, they do acknowledge including in their follow-up data reports from some dropouts who were essentially “untreated.”

In addition to the need for a control group, causality between two variables can only be inferred if those variables are operationally defined. In the Straight follow-up study, 33% of the participants were as young as 13, while 22% were 18 or older, and their time in the program ranged from less than one month to as long as 28 months. With such vast differences among clients and their individual experiences, and so little definition of the treatment itself, any causal inference would have been weak even if there had been a control group.

The authors state that even if their statistical comparisons are refuted because of these serious problems, clients and parents reported that they were “helped”: 85% of clients reported that Straight helped them reduce their drug use, and 92% reported that Straight helped them in some way. In short, what the project actually documented was how clients and parents perceived their participation in the Straight program, and these perceptions were then presented as scientific evidence that Straight “worked.” I have not addressed in this essay the parent surveys that were part of their research, but one former client I interviewed stated that his parents filled out this survey on his behalf. This is consistent with the authors’ disclosure that parents were asked to report their own satisfaction with the program as well as their perceptions of their child’s drug use. They were also asked to report their child’s suicidal thoughts and their church attendance.

Straight’s follow-up study was funded by the National Institute on Drug Abuse (NIDA) and published in the peer-reviewed Journal of Substance Abuse Treatment. Such blatant misuse of public funds is a travesty, but what’s worse is the way this propaganda was used and why it was necessary in the first place.

Next week: The participants’ perspective, a program’s legacy.
