Showing 1 to 15 of 47 results
Peer reviewed
Baldwin, Peter; Clauser, Brian E. – Journal of Educational Measurement, 2022
While score comparability across test forms typically relies on common (or randomly equivalent) examinees or items, innovations in item formats, test delivery, and efforts to extend the range of score interpretation may require a special data collection before examinees or items can be used in this way--or may be incompatible with common examinee…
Descriptors: Scoring, Testing, Test Items, Test Format
Berman, Amy I.; Haertel, Edward H.; Pellegrino, James W. – National Academy of Education, 2020
This National Academy of Education (NAEd) volume provides guidance to key stakeholders on how to accurately report and interpret comparability assertions concerning large-scale educational assessments as well as how to ensure greater comparability by paying close attention to key aspects of assessment design, content, and procedures. The goal of…
Descriptors: Educational Assessment, Educational Testing, Scores, Comparative Analysis
Peer reviewed
Jerrim, John – Assessment in Education: Principles, Policy & Practice, 2016
The Programme for International Student Assessment (PISA) is an important cross-national study of 15-year-olds' academic achievement. Although it has traditionally been conducted using paper-and-pencil tests, the vast majority of countries will use computer-based assessment from 2015. In this paper, we consider how cross-country comparisons of children's…
Descriptors: Foreign Countries, Achievement Tests, International Assessment, Secondary School Students
Peer reviewed
Berliner, David C. – Teachers College Record, 2015
Trying to understand PISA is analogous to the parable of the blind men and the elephant. There are many facets of the PISA program, and thus many ways to both applaud and critique this ambitious international program of assessment that has gained enormous importance in the crafting of contemporary educational policy. One of the facets discussed in…
Descriptors: Achievement Tests, Standardized Tests, Educational Assessment, Educational Indicators
Peer reviewed
Clemens, Nathan H.; Davis, John L.; Simmons, Leslie E.; Oslund, Eric L.; Simmons, Deborah C. – Journal of Psychoeducational Assessment, 2015
Standardized measures are often used as an index of students' reading comprehension and scores have important implications, particularly for students who perform below expectations. This study examined secondary-level students' patterns of responding and the prevalence and impact of non-attempted items on a timed, group-administered,…
Descriptors: Secondary School Students, Performance Based Assessment, Multiple Choice Tests, Reading Comprehension
Peer reviewed
Gandy, Sandra E. – Reading & Writing Quarterly, 2013
With the increasing amount of testing taking place in classrooms, teachers may question how appropriate those assessments are for the growing numbers of English language learners (ELLs) in the United States. One of the assessment options for classroom teachers is the informal reading inventory (IRI), which is the most frequently used assessment…
Descriptors: Informal Reading Inventories, English Language Learners, Student Evaluation, Standardized Tests
Peer reviewed
Kolen, Michael J.; Lee, Won-Chan – Educational Measurement: Issues and Practice, 2011
This paper illustrates that the psychometric properties of scores and scales that are used with mixed-format educational tests can impact the use and interpretation of the scores that are reported to examinees. Psychometric properties that include reliability and conditional standard errors of measurement are considered in this paper. The focus is…
Descriptors: Test Use, Test Format, Error of Measurement, Raw Scores
Spalding, Audrey – Mackinac Center for Public Policy, 2014
The 2014 Michigan Public High School Context and Performance Report Card is the Mackinac Center's second effort to measure high school performance. The first high school assessment was published in 2012, followed by the Center's 2013 elementary and middle school report card, which used a similar methodology to evaluate school performance. The…
Descriptors: Academic Achievement, Achievement Rating, Comparative Analysis, Comparative Testing
Peer reviewed
Yen, Wendy M.; Lall, Venessa F.; Monfils, Lora – ETS Research Report Series, 2012
Alternatives to vertical scales are compared for measuring longitudinal academic growth and for producing school-level growth measures. The alternatives examined were empirical cross-grade regression, ordinary least squares and logistic regression, and multilevel models. The student data used for the comparisons were Arabic Grades 4 to 10 in…
Descriptors: Foreign Countries, Scaling, Item Response Theory, Test Interpretation
Peer reviewed
Bramley, Tom; Gill, Tim – Research Papers in Education, 2010
The rank-ordering method for standard maintaining was designed for the purpose of mapping a known cut-score (e.g. a grade boundary mark) on one test to an equivalent point on the test score scale of another test, using holistic expert judgements about the quality of exemplars of examinees' work (scripts). It is a novel application of an old…
Descriptors: Scores, Psychometrics, Measurement Techniques, Foreign Countries
Stoneberg, Bert D. – Online Submission, 2009
Test developers are responsible for defining how test scores should be interpreted and used. The No Child Left Behind Act of 2001 (NCLB) directed the Secretary of Education to use results from the National Assessment of Educational Progress (NAEP) to confirm the proficiency scores from state-developed tests. There are two sets of federal definitions…
Descriptors: National Competency Tests, State Programs, Achievement Tests, Scores
Peer reviewed
Schmidt, Amy E. – NASSP Bulletin, 2001
Challenges many of the assertions, inferences, and recommendations of the Merchant and Paulson article "State Comparisons of SAT Scores: Who's Your Test Taker?". (Contains five references.) (PKP)
Descriptors: Comparative Analysis, Elementary Secondary Education, Scores, Test Interpretation
Peer reviewed
Powell, Brian; Steelman, Lala Carr – Harvard Educational Review, 1996
Updates an earlier study by reanalyzing interstate variations in Scholastic Aptitude Test and American College Test scores. Reaffirms the conclusion that state rankings based on such scores change dramatically when adjusted for number of test takers or class rank of test-taking population. Finds that public expenditures are positively related to…
Descriptors: Comparative Analysis, Educational Quality, Scores, State Programs
Callahan, Joseph P. – 1995
Clinicians commonly report age- or grade-equivalent derived scores, but such scores present interpretation problems. This article examines derived comparison score issues and recommends the use of scores of relative standing, such as percentile ranks or standard scores, rather than developmental scores like age and grade…
Descriptors: Age Differences, Clinical Diagnosis, Comparative Analysis, Instructional Program Divisions
Peer reviewed
Marco, Gary L.; Abdel-Fattah, A. A. – College and University, 1991
Describes a study to develop concordance tables between the new enhanced American College Testing Program (ACT) Assessment and the Scholastic Aptitude Test (SAT) in order to establish new score relationships. The scaling sample consisted of 40,051 students who took both tests and submitted scores to 14 large universities. (Author/MSE)
Descriptors: College Admission, College Entrance Examinations, Comparative Analysis, Higher Education