EDITOR'S NOTE: This article has been expanded by Witte into a book titled The Market Approach to Education: An Analysis of America's First Voucher Program (Princeton University Press, 1999).

From the PHI DELTA KAPPAN, September 1999

The ugliest outcome of the Milwaukee Parental Choice Program - if we consider the program's initial purpose and intent - may be yet to come, Mr. Witte warns.

BY JOHN F. WITTE

The program was initially limited in a number of ways: families had to have incomes at or below 175% of the poverty line, students could not have been in private schools in the previous year, only secular private schools could participate, schools had to select choice students randomly if there were more applicants than available seats in a grade, and there were limits on the number of voucher students in each school (originally 49%, raised to 65% in 1993) and in the program as a whole (originally approximately 950 students, raised to 1,500 in 1993). By subsequent court order, the private schools were not required to admit students with disabilities. The voucher amount was set at the per-pupil state aid that would otherwise have gone to the Milwaukee Public School (MPS) system; it has risen over the years from approximately $2,500 to $4,900 per child.

In the fall of 1990, I was asked by the Wisconsin Department of Public Instruction to evaluate the voucher program. I did so for five years, ending with a Fifth-Year Report issued on 31 December 1995. I raised funds for the evaluations from private foundations and the University of Wisconsin; my compensation came only in the form of partial released time from teaching obligations. Our research team collected data using surveys, case-study methods, outcome measures, and administrative data. We wrote five annual reports and have put the Fourth- and Fifth-Year Reports (which are cumulative), as well as the quantitative data and other papers and articles, on the Internet. I encourage readers to refer to the original reports and papers, which provide most of the evidence for this summary article.

The results of this "experiment" are not easily or simply summarized. As previous reports attest, there were positive and negative results of the program. In the last several years, a great deal has been written about the Milwaukee Parental Choice Program, most of it coming from voucher supporters who have been unhappy with some aspects of our reports and conclusions. Findings from both our research and that of our critics were used in the legislative debates in 1995 that led to the expansion of the program to include parochial schools; they were also cited in subsequent court cases. This short article summarizes what I believe are the good, the bad, and, unfortunately, the ugly aspects of the voucher program in Milwaukee.
The Good

One of the key issues in the theory and study of educational vouchers is whether it is possible to design a voucher program targeted toward poor families, providing them with opportunities similar to those available to the middle class. Results from the Milwaukee experiment indicate that we can do that. The voucher program, as originally designed, attracted very poor, mostly minority families whose children were not doing well in the public schools. For example, 93% of the choice students were nonwhite, and the average family income was approximately $12,000 per year. The parents had also become very dissatisfied with their prior public schools and thus were looking for an alternative. Five years of consistent data from separate sets of applicants substantiated these patterns.

Alternatives to MPS for these families were not readily available. Moving to the suburbs was very difficult, given the high level of city/suburban segregation in Milwaukee and the paucity of low-income housing in the suburbs. In addition, low family incomes made payment of private school tuition - even the average Catholic school tuition in 1991 of $1,200 per child - a considerable burden. Thus it seems clear that there was a demand for this type of alternative program. And our findings indicate that it is possible to design and implement a program that does not skim off the most qualified students for private schools.

There were beneficial results for both families and private schools. For example, we found that parents who responded to yearly follow-up surveys at the end of the school year were much more satisfied with the private schools than with their former public schools and nearly unanimously supported the continuation of the program. In addition, their relatively high levels of parental involvement tended to increase in the private schools. Finally, attendance was slightly higher than in the public schools, although, because the students were almost all in the lower elementary grades, attendance was over 90% for both populations.

The private schools also benefited from the program, which we interpreted positively because we felt that having viable school alternatives in inner cities is beneficial to everyone. In fact, many of the schools in the program also worked with MPS in taking in "contract" students - students from MPS who were having difficulty in the public schools. Subsequently, several of those schools converted to district-approved charter schools.

In several cases, these improvements represented remarkable changes. When the program began, two of the original seven schools were on the verge of bankruptcy. They are now solvent. With federal funds and private donations, one of those schools was able to build a beautiful new building, which has subsequently been expanded. Several other schools increased their enrollments. Teacher turnover also declined, and the faculties became more diverse racially and included more male teachers.

Voucher supporters warmly embrace these findings and repeat them often. What they do not repeat - and often try to reanalyze and interpret away - are the "bad" results.
The Bad

Although the parents of applicants to the voucher program were poor and unhappy with the public schools, they were also better educated, had higher educational aspirations for their children, and placed a higher value on education than a random sample of MPS parents. They were also considerably more involved in their prior public schools and with their children's education at home. These facts are problematic for choice supporters for several reasons. First, if these parents had remained in the public schools, they could have been a potent force for change: they were educated, involved, placed an emphasis on education, and were angry. Second, these characteristics might help to explain their children's subsequent achievement success apart from the magic attributed to the private schools. For example, parental level of education is always a significant predictor of children's achievement gains. Those who have reanalyzed our data have often excluded statistical controls for these characteristics in estimates of the achievement of choice students.

Although the voucher program was clearly beneficial in the minds of many choice parents, on average approximately 30% of the choice students left the choice schools each year (i.e., they did not graduate and could have returned but did not). Attrition was cumulative, so that, for example, after four years the first-year cohort of 341 students had only 85 students remaining (a rough check of this compounding appears in the sketch below). The majority of those who left returned to MPS. This attrition was not solely related to dissatisfaction with the private schools. Nevertheless, it remains unclear to me how these schools can work the miracles some choice supporters claim if the children are not in the schools.

Choice supporters often repeat the many laudatory notes about the private schools contained in our reports. But advocates rarely mention the three private schools that went bankrupt or the one that ceased operation in the third year of the program. The bankruptcies forced more than 350 choice students to change schools in mid-year. Two of the principals of these private schools were indicted, and as of this writing one is in prison. Students in these schools were never tested and so were not included in analyses of achievement scores.

The issue of achievement test scores has dominated the discussion of the Milwaukee voucher program. Some interpret our conclusion - that there were no demonstrable or consistent differences in test scores between choice students and a random sample of MPS students who did not apply for vouchers - as negative. I do not interpret these results negatively. Our reports indicate that for both public and private inner-city schools to hold their own in terms of national achievement gains is not a bad result. We also repeatedly point out the fallibility of standardized achievement tests as the primary measure of educational achievement. Unfortunately, in the elementary grades, other quantitative indicators such as class grades, dropout rates, course completions, or behavioral records were either not available or not useful.

Although many more sophisticated statistical models were employed in our analysis, the essence of the comparison between choice students, a random sample of all MPS students, and low-income MPS students who would have qualified for vouchers is captured in Figures 1 and 2. These figures depict the change in scores from year to year on the Iowa Tests of Basic Skills (ITBS) in reading and math over the four separate years of the program.
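As a rough consistency check on the attrition figures above, the compounding can be worked out in a few lines of Python. It is only an illustration: it assumes a constant 30% annual attrition rate, which the article reports only as an average; the cohort size of 341 and the 85 remaining students come from the report, and everything else is arithmetic.

# Back-of-the-envelope check of cumulative attrition (illustrative only;
# assumes a constant 30% annual rate, which the article gives only as an average).
initial_cohort = 341            # first-year (1990-91) choice cohort, from the text
annual_attrition = 0.30         # approximate yearly attrition reported above

remaining = initial_cohort * (1 - annual_attrition) ** 4
print(round(remaining))         # about 82, in line with the 85 students who actually remained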
The tests were given in May, and the measure was normal curve equivalents (NCEs), which range from 1 to 99 with a mean of 50 and a standard deviation of about 18. As is apparent, there was some volatility between years and across groups. For example, choice students did well in reading during the first year but did very poorly in the second year; in the last two years there was little change. In contrast, MPS students improved in both reading and math in year one, improved slightly in reading in year two, but then declined in both reading and math in the other years. In testing for statistically significant differences (at the .05 level) between choice students and either of the MPS groups, only two differences were significant: the second-year advantage of MPS students over choice students in reading and the reverse advantage of choice students over MPS students in math in the third year. Thus there was no pattern of superiority of choice students over MPS students or vice versa. And these results held up with a range of more complex and diverse statistical models.

Subsequent analyses of the test score data by voucher supporters focused almost exclusively on a comparison group that critics claim we neglected. That group is what we called the "rejects" - students who applied to the voucher program but were not admitted to the private schools. The belief that this group is important rests on the assumption that there is some factor present in the set of applicants to the program that would affect educational outcomes but that we could not observe. The rejected group could be very meaningful because we could assume that unmeasured characteristics of those who applied would be the same for those selected and those rejected - as long as students were randomly rejected. Thus there was the potential for a wonderful "natural" experiment.

Far from neglecting this group, we carefully followed them from 1990 to 1995. Unfortunately, for several reasons this potential natural experiment fell apart. First, the randomization process was contaminated. A court decision determined that the private schools need not accept disabled students applying under the voucher program. However, reports were not required from the private schools, and thus disabled students who were rejected could not be distinguished from those randomly turned down. Second, there were very few rejected students, especially in the early years, and they were much more likely to be in the lower grades (prekindergarten through grade 2). Even more problematic, of all the students rejected, 52% never returned to the public schools, and there were no subsequent data on them. And that group, many of whom probably went to private schools under a privately funded voucher program, came from higher-income families with better-educated parents. What that means is that the rejected students who went back to the public schools and thus stayed in the "experiment" were undoubtedly less able than those who left.

Even with these problems with the sample of rejected students, voucher advocates have had difficulty producing positive test score results for the choice students - but they have squeezed some out. Their efforts were extremely labored and were statistically significant only for math scores and only for students remaining in the choice schools three or four years. These analyses suffered from several problems. Their models generally failed to control for prior achievement levels.
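To make concrete what controlling for prior achievement involves - the value-added approach described in the next paragraph - here is a minimal sketch of the kind of regression that could be estimated. It is only an illustration: the file name, column names, and set of controls are hypothetical stand-ins, not the models actually used in our evaluation or in the reanalyses.

# Minimal sketch of a value-added comparison (hypothetical data and column names).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_scores.csv")   # hypothetical file: one row per student

# Post-test NCE score regressed on the prior year's score, a choice/MPS indicator,
# and background controls such as family income and parental education.
model = smf.ols(
    "nce_post ~ nce_prior + choice + family_income + parent_education",
    data=df,
).fit()
print(model.summary())   # the coefficient on the choice indicator is the value-added contrast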
We have always controlled for prior achievement in our studies because we believe that schools can be judged only on the value they add to education. That means not simply studying absolute scores but rather basing judgments on improvements in scores. In comparison to the large random sample of low-income MPS students, which we used as a primary control once the "reject sample" failed, the only positive results of any other studies were in math. The results were minimal, however, and the studies often did not control for either prior tests or background characteristics of students. Our value-added methods for four years of data, which included controlling for relevant differences between students, did not find any significant differences between choice and MPS students. (The one finding of ours that came close to standard levels of statistical significance favored MPS students in reading.) Nor have voucher advocates explained why the private schools should have been good at math education but not at reading. This is doubly curious because, in our case studies of the private schools, many of the teachers reported concentrating on reading as the basic educational building block.

A study by Jay Greene, Paul Peterson, and Jiangtao Du, released two days prior to Peterson's Wisconsin court testimony supporting the Milwaukee program, found a remarkable effect comparing choice students and rejected students.6 However, it held only for math and only for the few students who remained in choice schools for three or four years. The fourth-year effect was an astonishing difference of 11 NCEs between choice students and rejected students who first applied to the choice program in the fall of 1990, remained in school, and were tested in the spring of 1994. The result was more than one-half of a standard deviation in that last year alone. I would agree that, if this finding were an accurate indicator of the overall outcome of the choice program, it would be a miracle. No research of which I am aware has produced anything like this type of one-year gain in any education reform. If such results accrued over the course of a 12-year education, not only would the differences between white students and black students be eradicated (as Peterson has repeatedly claimed), but the differences between public school students and private school students would be more than six standard deviations by the end of high school. If that were true, no one in America would be advised to send their children to public schools.

Unfortunately, the problems with this result are so numerous that I don't think anyone really believes it. At least, no one should. First, both sides of this "experiment" are contaminated. Many rejected students disappeared - in fact, in my reanalysis of these data, only 27 rejected students remained in MPS in 1994. Second, because the remaining rejected students were less able students, those 27 students were way behind both the choice students and the MPS students from the first year (see Figure 3). Third, attrition among the choice students was extremely high from the beginning - and it, too, was not random. The private schools did not have to readmit students, and they did not have to report whom they refused to readmit. In our Fourth-Year Report, we analyzed attrition for the first four years and found that the students leaving the private schools had lower test scores than those returning. Thus the remaining group of choice students after four years was clearly a select group.
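Before turning to my reanalysis, it is worth seeing where the "more than six standard deviations" figure comes from. The short calculation below uses the NCE standard deviation of roughly 18 noted earlier and assumes, as the extrapolation itself does, that an effect of this size repeats every year over a 12-year education; it is a check of the arithmetic, not a new result.

# Arithmetic behind the "six standard deviations" extrapolation (illustrative only).
nce_sd = 18             # approximate standard deviation of NCE scores, noted above
reported_gap = 11       # fourth-year choice-versus-reject difference in NCEs

per_year_effect = reported_gap / nce_sd     # roughly 0.6 standard deviations
twelve_year_gap = per_year_effect * 12      # roughly 7 standard deviations if repeated yearly
print(round(per_year_effect, 2), round(twelve_year_gap, 1))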
When I analyzed the comparison between the choice students and the rejected students, I was able more or less to replicate the Greene, Peterson, and Du results. But I also studied the outcomes for all three groups, which their study did not report. What I found was that the 27 rejected students were not only behind choice students in math from the first year, but they were also well below a random sample of low-income MPS students who generally did better than the choice students (see Figure 3). In addition, the average scores of rejected students are misleading, because it turns out that five of those 27 students received a "1" on the math portion of the ITBS in their fourth year (1994). That is the lowest score possible and much lower than their 1993 math scores (which averaged 30.2 and ranged from 22 to 36). Why these stark differences over a year? The most likely explanation - certainly for some of these students - is that they simply did not take the test. That is, they probably signed their names and then failed to fill in the bubbles. Computerized scoring would give these tests a score of 1. In any event, when I took out those five students and two choice students scoring below five points on the test, the estimated math differences between choice participants and rejects were no longer significant. Thus dropping just seven students changed the results completely. Further, when I compared the scores of rejected students with our random sample of low-income MPS students (who were MPS classmates of the rejected students and eligible for the choice program), the MPS students did even better than the choice students. And when I eliminated the lowest-achieving students from this comparison, the MPS students still did statistically significantly better than the rejected students on the math test.

In short, the dramatic gains reported by the Greene/Peterson/Du study must be deemed invalid because this "natural experiment" was fatally flawed. The point is that the outcomes touted so widely by choice supporters are not the result of superior performance by choice students at all, but rather a result of the failure of a handful of rejected students who still remained in MPS. The less able choice students and the more able rejected students dropped out, leaving after three and four years a highly nonrandom comparison that was stacked in favor of choice students.

The Ugly

In 1995 the Wisconsin legislature, at the behest of business interests in Milwaukee and some choice supporters, voted to expand the program to religious schools and to raise the cap on the number of students who can participate to 15,000. In addition, students in grades K-3 who already attended private schools could enroll in the choice program. The expanded law was first declared unconstitutional by a county circuit court but was later upheld by the state supreme court on a 4-2 vote. The U.S. Supreme Court declined to take up the appeal, and thus the expanded program became the law of the land - at least in Wisconsin.

The irony is that there is absolutely no evidence in my study or any other that would justify this expansion of the program. Parochial schools were not part of our study, and no data exist involving parochial schools. Our reports always contained a disclaimer that results could not be generalized to a broader voucher program. The data limitations also apply to the analyses by voucher supporters, although they have not been so constrained in the implications they draw.
Perhaps the ultimate irony is that, because the legislature chose to eliminate the annual evaluations when the program was expanded in 1995, no data are being or will be collected to determine how the new program and schools are doing.7

Finally, with the Supreme Court's de facto approval, the legislature may be unable to resist still further expansion of the program. Why, for example, should only Milwaukee students benefit, or only poor families? All the legislators in the state have a number of parochial school families in their districts, and it will be very difficult for them to deny simple demands to be included. The Wisconsin Independent Schools Association has already instituted a statewide lobbying effort to expand the program to include all private schools and families. And if choice is expanded to private and parochial schools across the state, it will not be poor, inner-city, black, and Hispanic children who will benefit - those for whom the original program was intended and whom it did serve. Based on 1990 census data, 16% of Wisconsin's students attended private, mostly parochial, schools. In Milwaukee, more than 84% of private school students were white, and their average family income was more than $40,000 annually. In contrast, only 33% of public school students were white, and their family income averaged approximately $25,000.8 Does anyone believe that these middle- and upper-middle-income private school families will be content to continue to pay tuition when everyone else is receiving either free public education or vouchers?

So the ugliest outcome of the Milwaukee Parental Choice Program - if we consider the program's initial purpose and intent - may be yet to come.

1. Our website address is http://dpls.dacc.wisc.edu/choice/choice-index.html.