TIMSS Crosslink Study: General Discussion (Third International Mathematics and Science Study)
For more information on cross-linking and on the specific approaches used in developing Indicator 25, see Peter J. Pashley and Gary W. Phillips, Toward World-Class Standards: A Research Study Linking International and National Assessments (Princeton, NJ: Educational Testing Service, June 1993); Peter J. Pashley, Charles Lewis, and Duanli Yan, "Statistical Linking Procedures for Deriving Point Estimates and Associated Standard Errors," paper presented at the National Council on Measurement in Education (Princeton, NJ: Educational Testing Service, April 1994); Albert E. Beaton and Eugenio J. Gonzalez, "Comparing the NAEP Trial State Assessment Results with the IAEP International Results," in Setting Performance Standards for Student Achievement: Background Studies (Stanford, CA: National Academy of Education, 1993); Robert J. Mislevy, Albert E. Beaton, Bruce Kaplan, and Kathleen M. Sheehan, "Estimating Population Characteristics from Sparse Matrix Samples of Item Responses," Journal of Educational Measurement, Summer 1992, vol. 29, no. 2, pp. 133-161; and Robert J. Mislevy, Linking Educational Assessments: Concepts, Issues, Methods, and Prospects (Princeton, NJ: Educational Testing Service, December 1992).

Table S23. Mathematics proficiency scores for 13-year-olds in countries and public school 8th-grade students in states, calculated using the equi-percentile linking method, according to Beaton and Gonzalez, by country (1991) and state (1990)

------------------------------------------------------------------------------
                                  |      Percent of population in each
                                  |         proficiency score range
COUNTRY/State          Mean   SE  |  <200   200-250   250-300   300-350   >350
------------------------------------------------------------------------------
TAIWAN                296.7  1.5  |   3.2      13.4      33.9      36.6   12.9
KOREA                 294.1  1.3  |   1.9      10.3      41.8      39.3    6.7
SOVIET UNION          287.6  1.5  |   0.8      10.4      53.1      34.0    1.7
SWITZERLAND           287.5  1.9  |   0.2       8.8      57.9      32.2    0.9
HUNGARY               284.8  1.4  |   1.4      13.5      52.6      29.9    2.7
North Dakota          281.1  1.2  |   0.8      13.2      60.0      24.8    1.3
Montana               280.5  0.9  |   0.5      14.3      59.5      24.9    0.8
FRANCE                278.1  1.3  |   1.4      16.8      57.5      23.4    1.0
Iowa                  278.0  1.1  |   0.6      18.3      57.0      23.3    0.7
ISRAEL                276.8  1.3  |   1.5      15.6      61.6      20.7    0.6
ITALY                 276.3  1.4  |   1.6      18.1      57.7      22.0    0.5
Nebraska              275.7  1.0  |   2.0      18.6      56.2      22.4    0.9
Minnesota             275.4  0.9  |   1.6      19.2      57.0      21.2    1.1
Wisconsin             274.5  1.3  |   1.5      20.8      55.4      21.6    0.7
CANADA                274.0  1.0  |   1.4      17.6      63.7      16.7    0.7
New Hampshire         273.2  0.9  |   1.4      21.2      58.1      18.9    0.5
SCOTLAND              272.4  1.5  |   1.6      20.6      59.7      17.7    0.4
Wyoming               272.2  0.7  |   1.1      20.9      60.3      17.4    0.2
Idaho                 271.5  0.8  |   1.2      22.1      59.7      16.8    0.2
IRELAND               271.4  1.4  |   3.1      21.0      57.1      18.0    0.8
Oregon                271.4  1.0  |   2.2      23.8      54.2      19.2    0.6
Connecticut           269.9  1.0  |   3.2      25.3      50.7      20.1    0.7
New Jersey            269.7  1.1  |   2.4      26.9      50.2      19.7    0.8
Colorado (NAEP)       267.4  0.9  |   2.8      26.5      54.7      15.7    0.4
SLOVENIA              267.3  1.3  |   1.6      25.7      60.2      12.2    0.4
Indiana               267.3  1.2  |   2.0      28.2      53.9      15.4    0.5
Pennsylvania          266.4  1.6  |   3.2      27.5      53.0      15.8    0.5
Michigan              264.4  1.2  |   3.1      30.1      51.7      14.5    0.6
Virginia              264.3  1.5  |   3.3      32.8      47.3      15.4    1.3
Colorado (IAEP)       264.2  0.7  |   3.1      28.8      55.4      12.4    0.4
Ohio                  264.0  1.0  |   3.1      30.5      52.4      13.8    0.3
Oklahoma              263.2  1.3  |   2.8      30.8      53.8      12.5    0.2
SPAIN                 261.9  1.3  |   2.1      29.0      62.0       6.9    0.0
UNITED STATES (IAEP)  261.8  2.0  |   5.0      30.6      52.0      11.5    0.9
United States (NAEP)  261.8  1.4  |   5.0      31.5      49.0      14.0    0.5
New York              260.8  1.4  |   5.9      31.4      48.0      13.9    0.8
Maryland              260.8  1.4  |   5.7      33.1      45.3      15.3    0.6
Delaware              260.7  0.9  |   4.6      34.2      47.6      13.0    0.6
Illinois              260.6  1.7  |   5.7      31.4      49.1      13.4    0.5
Rhode Island          260.0  0.6  |   5.0      34.0      47.3      13.5    0.3
Arizona               259.6  1.3  |   4.5      33.8      49.7      11.7    0.4
Georgia               258.9  1.3  |   5.3      35.2      46.5      12.5    0.6
Texas                 258.2  1.4  |   4.8      36.4      46.7      11.7    0.4
Kentucky              257.1  1.2  |   3.9      38.2      47.9       9.8    0.2
New Mexico            256.4  0.7  |   4.3      38.2      47.7       9.6    0.3
California            256.3  1.3  |   6.9      35.9      45.2      11.5    0.4
Arkansas              256.2  0.9  |   4.6      37.3      49.4       8.6    0.1
West Virginia         255.9  1.0  |   4.3      38.7      48.4       8.5    0.2
Florida               255.3  1.3  |   6.6      37.7      44.3      11.2    0.2
Alabama               252.9  1.1  |   6.2      40.5      44.8       8.3    0.3
Hawaii                251.0  0.8  |   9.9      39.2      39.8      10.6    0.5
North Carolina        250.4  1.1  |   7.9      41.2      42.6       8.1    0.0
Louisiana             246.4  1.2  |   8.2      46.1      40.6       4.9    0.2
JORDAN                236.1  1.9  |  16.0      48.3      32.6       3.1    0.0
District of Columbia  231.4  0.9  |  16.7      56.9      23.6       2.5    0.3
------------------------------------------------------------------------------

NOTE: Countries and states are sorted from high to low based on their mean proficiency scores. Colorado participated in both the NAEP Trial State Assessment and, separately, in the International Assessment of Educational Progress.

SOURCE: Albert E. Beaton and Eugenio J. Gonzalez, "Comparing the NAEP Trial State Assessment Results with the IAEP International Results," in Setting Performance Standards for Student Achievement: Background Studies (Stanford, CA: National Academy of Education, 1993).

Footnotes

(4) For the NAEP and the IAEP IRT scales, conventional individual scale scores are not generated. Instead, the scaling process generates a set of five "plausible values" for each student. The five plausible values reported for each student can be viewed as draws from a distribution of potential scale scores consistent with the student's observed responses on the test and the student's measured background characteristics. In other words, the plausible values are constructed to have a mean and variance consistent with the underlying true population values. In this sense, the plausible values correct for unreliability. See Mislevy, Beaton, Kaplan, and Sheehan, 1992.

(5) The actual procedure used by Pashley and Phillips was somewhat more complex than the method described in the text. Five regressions were estimated, one for each pair of IAEP and NAEP plausible values (see the previous footnote). Given the sample sizes involved, the regression parameters produced by the five regressions differ only marginally.

(6) The regression parameters shown in the table are based on an approximate analysis using the reported correlation between the IAEP and the NAEP total mathematics score (r = .825), as well as the means and standard deviations of the IAEP and the NAEP in the linking sample, averaging across the five sets of plausible values. The results obtained by averaging in this way differ only slightly from those of the method used by Pashley and Phillips, which is based on separate regressions for each of the five plausible-value pairs. See the previous two footnotes.

(7) In the method as implemented by Pashley and Phillips, the five regression equations were each used to obtain predicted NAEP scores at the individual level, and the results were averaged to produce country means. The results are very similar to those obtained using the somewhat simpler method discussed in the text (a short illustrative sketch of that procedure follows these footnotes).

(8) Like Pashley and Phillips, Beaton and Gonzalez carried out their procedure separately for each of the five sets of plausible values and then averaged the results obtained for each set. The results differ only slightly when their procedure is carried out once using published estimates of means and standard deviations.
(9) The 1990 NAEP mathematics results were rescaled in 1992, producing slightly different scale scores. Beaton and Gonzalez used the 1992 rescaling.

(10) The simple regression coefficient required for the projection method can be expressed as r(sy/sx), where r is the correlation between the IAEP and the NAEP, sy is the standard deviation of the NAEP, and sx is the standard deviation of the IAEP. The conversion coefficient required for the moderation method is simply sy/sx. (A brief numeric comparison of the two coefficients follows these footnotes.)

(11) Given the data required, it is possible to develop moderation estimates similar to those developed by Beaton and Gonzalez for several different samples. But because the Pashley and Phillips projection method requires paired IAEP and NAEP data, the linking sample is the only data set in which it can currently be applied.

(12) As discussed in footnotes 4-7 above, Beaton and Gonzalez based their estimates on the full set of individual-level plausible values for each country. We developed the estimates in Tables S21 and S22 using only the reported country means and standard deviations derived from the plausible values. These results differ only slightly from those that would be obtained using the full set of plausible values.

(13) The interpretation of the predicted NAEP scores based on the moderation method is complicated by the fact that the IAEP sample used to develop the conversion constants included students in both public and private schools, while the NAEP sample included only public school students. Since the NAEP results for the full sample of eighth graders, including both public and private students, differ only modestly from the results for the public-school-only sample, this problem probably accounts for relatively little of the difference in predicted outcomes between the projection and moderation approaches.

(14) The plausible values generated for the IAEP and the NAEP are designed to reflect the true population mean and variance, but correlations among plausible values are attenuated due to unreliability.

(15) Since the IAEP and NAEP plausible values are designed to produce unbiased estimates of population variance, moderation methods that make use of the plausible values should not be sensitive to measurement error.

(16) To obtain valid NAEP scores in countries outside the United States, language and other issues would of course need to be taken into account.
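The projection procedure summarized in footnotes 4-7 amounts to fitting a regression of NAEP on IAEP in the paired linking sample, once per plausible-value pair, averaging the coefficients, and applying the averaged equation to a country's IAEP results. The short Python sketch below illustrates that logic on synthetic data; the sample size, score distributions, and variable names (iaep_pv, naep_pv) are assumptions for illustration only, not the restricted linking-sample data or the code actually used by Pashley and Phillips.

```python
# Illustrative sketch of the projection linking step (footnotes 4-7).
# All data below are synthetic stand-ins for the restricted linking sample.
import numpy as np

rng = np.random.default_rng(0)
n_students, n_pv = 1500, 5          # hypothetical sample size; five plausible values

# Paired plausible values: five IAEP and five NAEP draws per student.
iaep_pv = rng.normal(262, 30, size=(n_students, n_pv))
naep_pv = 0.8 * iaep_pv + rng.normal(50, 18, size=(n_students, n_pv))

# One simple regression of NAEP on IAEP per plausible-value pair (footnote 5),
# then average the five slopes and intercepts.
fits = np.array([np.polyfit(iaep_pv[:, k], naep_pv[:, k], deg=1) for k in range(n_pv)])
slope, intercept = fits[:, 0].mean(), fits[:, 1].mean()

# Apply the averaged equation to a country's IAEP mean -- the "simpler method"
# noted in footnote 7 (Pashley and Phillips projected individual scores first).
iaep_country_mean = 271.4           # e.g., Ireland's IAEP mean from Table S23
predicted_naep_mean = intercept + slope * iaep_country_mean
print(f"slope = {slope:.3f}, intercept = {intercept:.1f}, "
      f"predicted NAEP mean = {predicted_naep_mean:.1f}")
```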
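Footnote 10 gives both conversion coefficients in closed form, so their behavior is easy to check numerically. In the sketch below, only the correlation r = .825 comes from the text (footnote 6); the standard deviations and means are hypothetical placeholders. The point illustrated is that, because r is less than 1, the projection slope r(sy/sx) pulls a country that is far from the linking-sample mean back toward that mean more strongly than the moderation coefficient sy/sx does.

```python
# Numeric comparison of the projection and moderation coefficients (footnote 10).
# Only r = .825 is taken from the text; the other summary statistics are hypothetical.
r = 0.825                 # reported IAEP-NAEP correlation (footnote 6)
sx, sy = 28.0, 32.0       # hypothetical IAEP and NAEP standard deviations
mx, my = 262.0, 262.0     # hypothetical IAEP and NAEP means in the linking sample

projection_slope = r * sy / sx    # regression coefficient: r * (sy / sx)
moderation_slope = sy / sx        # simple scale conversion: sy / sx

iaep_mean = 236.1                 # e.g., Jordan's IAEP mean from Table S23
for label, b in (("projection", projection_slope), ("moderation", moderation_slope)):
    predicted = my + b * (iaep_mean - mx)
    print(f"{label}: slope = {b:.3f}, predicted NAEP mean = {predicted:.1f}")

# Because r < 1, the projection slope is smaller, so scores far from the
# linking-sample mean are shrunk toward it relative to the moderation estimate.
```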