Your Discussion at the District 2 Community Education Council Mathematics Forum

by Elizabeth Carson to Chancellor Klein
June 9, 2005

Copy to: Members of the CEC

1. To defend yourself against the concerns and criticisms of mathematicians regarding District 2 math programs, expressed by City College Mathematics Professor Stanley Ocken, by referring to Uri Treisman as your source of mathematical expertise, views, and advice was a weak and transparent tactic.

One might reasonably ponder why, with the 15-20 distinguished mathematicians in our own community at NYC colleges and universities who are ready, willing, and able to provide much-needed mathematical expertise to inform NYC mathematics programs, curriculum, and policy, you, and Levy before you, would go all the way to Texas to find a mathematician who supports fuzzy math. Of course, we both know the answer to that.

Treisman is hardly a disinterested party in all this. He's been allied with the District 2 math reform for years. I assume you are aware he promoted CMP (the constructivist middle school math program used in District 2 middle schools with NSF EHR grant support) in schools across Texas through the Dana Center and with multi-million dollar NSF EHR funding. Maybe you think that's a good thing. We do not. See Chapter 13, National Science Foundation Systemic Initiatives: How a small amount of federal money promotes ill-designed mathematics and science programs in K-12 and undermines local control of education, by Michael McKeown, David Klein, Chris Patterson.

Furthermore, to suggest that Treisman's advice on mathematics education equals, much less trumps, the critical expertise of the Chairs of the full set of mathematics departments at the CUNY senior colleges and of a large group of distinguished mathematicians at the Courant Institute is disingenuous, irresponsible, and ludicrous.

You have not duly consulted mathematics experts, and it is profoundly irresponsible, much to the detriment of NYC children, to utterly ignore the repeated appeals, interest, analyses, and concerns of a large group of NYC mathematicians.

2. Your assertion that NSF's imprimatur, through its funding of NCTM reform math, connotes something worthwhile ignores the considerable controversy surrounding the integrity of decisions at the Education and Human Resources Directorate (EHR) within the NSF. Within the NSF, the EHR division is looked on as an embarrassment. See letters and testimony before Congress (right column).

The NSF EHR has liberally funded the research, development, and implementation of all of the constructivist math programs, including TERC and Everyday Math, which remain, after all these years, experimental and content-deficient.

3. Ten years and approximately an equal number of billions of dollars later, the full research effort supported with NSF EHR dollars still has not produced a body of scientific research to justify the adoption of TERC and CMP by District 2, or of Everyday Math and Impact Math by NYC.

These are the sad and revelatory findings of the National Research Council. A report released by the National Academies reveals that all 13 NSF-funded mathematics programs lack scientifically valid evaluation studies.

Among the programs without a body of research proving effectiveness are NYC's universal elementary program, Everyday Math, and several others used in some waivered K-12 schools.

All three Manhattan District 2 math programs were identified as unproven: Investigations in Number, Data and Space (TERC); Connected Mathematics Project (CMP); and Mathematics: Modeling Our World (ARISE).

National Academies announcement:

"Evaluations of mathematics curricula provide important information for educators, parents, students and curriculum developers, but those conducted to date on 19 specific curricula fall short of the scientific standards necessary to gauge overall effectiveness, says a new report from the National Academies' Mathematical Sciences Education Board." - Latest News From the National Academies, May 18, 2004

The prepublication version of the full report is available and free to the public online: On Evaluating Curricular Effectiveness: Judging the Quality of K-12 Mathematics Evaluations, produced by the Mathematical Sciences Education Board, National Research Council of the National Academies.

4. Your statement claiming sound research behind the constructivist programs adopted in District 2 and city schools is, of course, false; it is in fact listed as Myth #10 in Ten Myths About Math Education And Why You Shouldn't Believe Them.

5. TERC's content mediocrity is well documented; see our page on TERC at

6. As is Everyday Math's; see our page at

7. Your Children First decision making charade is chronicled at

Please find attached below expert commentary (part of a larger email discussion) on the ARC study specifically, which is most often referenced as the best extant study of Everyday Math and TERC (messages appear in reverse chronological order). The first comments are by a mathematician and build on a main analysis written by a research scientist, which then follows.

- Elizabeth Carson


Thanks for this excellent analysis. I think that, in addition to the points you made, two other issues should be raised:

1) What exactly were the "non-reform" curricula used in this study? If the textbooks are not explicitly identified, the study is meaningless. How do we know that the comparison wasn't really NCTM math books versus different NCTM math books? Why didn't the investigators pick a formidable "opponent" program such as the Singapore books or Saxon Math?

2) Were the schools named in this study? Typical practice in education research is not to name the schools, in which case the study is worthless: the reader is simply unable to verify the results independently of the authors of the study, who usually have a vested interest in the outcome.

--------------------------- on the ARC center study. Feel free to share this with the others if you think they will find it helpful.

Below are my comments on a study conducted by ARC, one of the NSF-funded Centers for Teaching and Learning. This illustrates the problem associated with NSF's EHR programs. Information about ARC can be found here and the study can be found here.

This was a study of the efficacy of three NSF-funded, NCTM-based "reform curricula" on student learning. The three curricula evaluated were Everyday Math; Investigations in Number, Data and Space; and Math Trailblazers, and students were evaluated in Illinois, Massachusetts, and Washington State.

The study matched students from schools using reform-curricula (for at least two years prior to the study) with "comparison" students (similar reading scores, socioeconomic standing, race, etc.) at schools not using reform curricula, and then evaluated performance on a standardized test.

In addition to the comparison students, the study also looked at a much larger group of students who were not matched by reading scores, race or socioeconomic standing, and who did not receive instruction from a reform program, and they called that group the "non-reform" group. One must be careful in looking at the data tables to compare the "reform" students with the "comparison" students, not the "non-reform" students, since the non-reform students differ from the reform students by a number of factors.

Interestingly, in the methods section, the study says that reading scores and socioeconomic standing have the greatest impact on performance, but in the summary of results, they credit all of the differences (the whopping under-2% difference!) in performance to the reform-based curricula. And, by labeling the non-matched group as the non-reform group, a cursory look at the graph without the benefit of reading the full methodology would lead one to believe that there is an enormous gap between reform and non-reform students based solely on the curricular differences.

In reality, among students matched for race, socioeconomic standing, and reading scores, the comparison students (those who were NOT instructed with an NSF curriculum) performed almost equally to the reform students. Whereas the reform students in Illinois in grade three scored 70% on the standardized test, the matched comparison group (using a non-reform curriculum) scored 68.4%.

There is no mention in the study of the impact of tutoring or supplemental education on students engaged in the funded program. NSF funding often pays for after-school programs and for additional resource teachers in the classroom, so a positive effect can be the result of increased resources rather than of the particular curriculum or instructional methods in use. Nor is there a control for schools receiving equal financial support from NSF for teacher professional development in the use of a non-reform curriculum. Students in the comparison schools may therefore have been at a disadvantage simply because their teachers were not involved in special programs, because their schools may not have received extra government funding, or because there were no supplemental instruction programs or additional resource teachers available to them. The slight difference in performance could thus be the result of a general positive effect associated with having additional resources.

But let's just assume that the differences are the result of curricular differences. I'm not sure that a performance difference of 1.6% justifies the millions of dollars that have been spent on developing, implementing, and providing teacher professional development for the reform curricula. I would say not. In each of the states analyzed, the difference between the reform group and the comparison group was less than 2%. So, millions of dollars later, we see an effect of less than 2%. Does that end justify the amount of money spent? And how much of that 2% difference was the result of outside instruction at Sylvan or Huntington Learning Center?
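To put the reported 70% versus 68.4% figures on a standard scale, one can compute Cohen's h, a conventional effect-size measure for a difference between two proportions, where values below 0.2 are considered "small." A minimal sketch (the threshold convention, not the computation itself, is the assumption here):

```python
import math

def cohens_h(p1, p2):
    """Cohen's effect size h for the difference between two proportions."""
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

# Reported Illinois grade-three figures: reform 70%, matched comparison 68.4%.
h = cohens_h(0.70, 0.684)
print(round(h, 3))  # roughly 0.035, far below even the "small" threshold of 0.2
```

By this conventional yardstick, the reported difference is negligible.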

Beyond that, the reform folks say that the NSF-funded/NCTM-based reform curricula are particularly effective in improving the scores of low-functioning kids and in eliminating the achievement gap. Yet when you look at the data in this study, what you see is that the performance profile among the reform students is EXACTLY the same as it is for the comparison students. That is, Asian kids do great; black kids do the worst; Hispanic kids do better than black kids but worse than white kids (and, by the way, the reform curriculum had absolutely no impact on Hispanic kids); and white kids do almost as well as Asian kids. So, certainly no positive effect on the achievement gap among kids who were taught with reform curricula.

And, if you look even closer, the reform students, like the comparison students, do better in the lower grades and perform worse as they move toward middle school. So the reform curriculum doesn't seem to have any positive effect on reversing the trend of diminishing performance as kids progress to higher grade levels.

These data also show that when the state changes the test, as was the case in Washington, scores change significantly - for both the reform and comparison groups.

Certainly the non-reform data show a significant gap in performance when compared with either the reform or the comparison group, but, as the study indicates in the methods section, this is the result of disparities in reading scores, race, and socioeconomic standing rather than of curricular differences.

So, I would say that the taxpayers have wasted hundreds of millions of dollars on the three curricula involved in the study - Everyday Math, Investigations in Number, Data and Space, and Math Trailblazers. And yet the NSF continues to fund teacher training, teacher professional development and supplemental instruction programs based SOLELY on these curricula.

Perhaps if NSF gave the same amount of money for curriculum implementation, teacher professional development, and after-school learning activities to the non-reform schools as they gave to the reform schools (a very important control that is always missing from NSF studies since they will fund ONLY reform curricula), we might see an even larger improvement in student performance when more traditional curricula are in use.
