Showing all 8 results
Peer reviewed
Liu, Jinghua; Zu, Jiyun; Curley, Edward; Carey, Jill – ETS Research Report Series, 2014
The purpose of this study is to investigate the impact of discrete anchor items versus passage-based anchor items on observed score equating using empirical data. This study compares an "SAT"® critical reading anchor that contains proportionally more discrete items, relative to the total tests to be equated, with another anchor that…
Descriptors: Equated Scores, Test Items, College Entrance Examinations, Comparative Analysis
Hixson, Nate; Rhudy, Vaughn – West Virginia Department of Education, 2013
Student responses to the West Virginia Educational Standards Test (WESTEST) 2 Online Writing Assessment are scored by a computer-scoring engine. The scoring method is not widely understood among educators, and there exists a misperception that it is not comparable to hand scoring. To address these issues, the West Virginia Department of Education…
Descriptors: Scoring Formulas, Scoring Rubrics, Interrater Reliability, Test Scoring Machines
Peer reviewed
Innes, Richard G. – Journal of School Choice, 2012
This article provides examples of how serious misconceptions can result when only "all student" scores from the National Assessment of Educational Progress (NAEP) are used for simplistic state-to-state comparisons. Suggestions for better treatment are presented. The article also compares Kentucky's eighth grade EXPLORE testing to NAEP…
Descriptors: National Competency Tests, Scoring, Misconceptions, Academic Achievement
Peer reviewed
Schulz, Wolfram; Fraillon, Julian – Educational Research and Evaluation, 2011
When comparing data derived from tests or questionnaires in cross-national studies, researchers commonly assume measurement invariance in their underlying scaling models. However, different cultural contexts, languages, and curricula can have powerful effects on how students respond in different countries. This article illustrates how the…
Descriptors: Citizenship Education, International Studies, Item Response Theory, International Education
McGlynn, Angela Provitera – Education Digest: Essential Readings Condensed for Quick Review, 2008
A new report, "The Proficiency Illusion," released last year by the Thomas B. Fordham Institute, states that the tests that states use to measure academic progress under the No Child Left Behind Act (NCLB) are creating a false impression of success, especially in reading and especially in the early grades. The report is a collaboration…
Descriptors: Federal Legislation, Academic Achievement, Rating Scales, Achievement Tests
Boldt, Robert F. – 1971
Scores from tests in the same battery are put on scales that are the "same" in some sense, so that certain interpretations are made easier. This is often done when scores for different tests are obtained from different population segments, especially with newer, more varied batteries of test offerings. It is felt that traditional erroneous…
Descriptors: Comparative Analysis, Comparative Testing, Equated Scores, Measurement Techniques
ACT, Inc., 2005
One of the most challenging issues a state must resolve in designing a statewide standards and college readiness assessment is how student scores should be reported. The ACT is an effective and reliable measure of student readiness for college and work, but in some cases states may wish to augment the ACT with tests of their own design. In…
Descriptors: Academic Achievement, Raw Scores, Achievement Rating, School Readiness
Peer reviewed
Des Brisay, Margaret – TESL Canada Journal, 1994
Data from the Canadian Test of English for Scholars and Trainees (CanTEST) are compared to data from the Test of English as a Foreign Language (TOEFL) to establish CanTEST as a valid admissions tool for English-as-a-Second Language college applicants. Data are taken from four groups of examinees who took both tests. (eight references) (LR)
Descriptors: Admission Criteria, Comparative Analysis, Comparative Testing, Correlation