Publication Date
In 2025: 0
Since 2024: 1
Since 2021 (last 5 years): 3
Since 2016 (last 10 years): 7
Since 2006 (last 20 years): 13
Descriptor
Scores: 16
Test Format: 16
Comparative Analysis: 5
Test Construction: 5
Test Items: 5
Computer Assisted Testing: 4
Data Analysis: 4
Scoring: 4
Test Interpretation: 4
Educational Trends: 3
Evaluation Methods: 3
Author
Baldwin, Peter: 1
Boone, William J.: 1
Clariana, Roy B.: 1
Clauser, Brian E.: 1
DiVesta, Francis J.: 1
Eignor, Daniel R.: 1
Hendrickson, James M.: 1
Jianbin Fu: 1
Jin, Yan: 1
Kolen, Michael J.: 1
Lawrence, Ida M.: 1
Publication Type
Reports - Descriptive: 16
Journal Articles: 11
Books: 1
Tests/Questionnaires: 1
Education Level
Higher Education: 4
Elementary Education: 3
Postsecondary Education: 3
Elementary Secondary Education: 2
Grade 12: 2
Grade 4: 2
Grade 8: 2
High Schools: 2
Intermediate Grades: 2
Junior High Schools: 2
Middle Schools: 2
Audience
Teachers: 2
Policymakers: 1
Practitioners: 1
Location
China: 1
Netherlands: 1
Laws, Policies, & Programs
No Child Left Behind Act 2001: 1
Assessments and Surveys
National Assessment of…: 4
SAT (College Admission Test): 2
New York State Regents…: 1
Jianbin Fu; Xuan Tan; Patrick C. Kyllonen – Journal of Educational Measurement, 2024
This paper presents the item and test information functions of the Rank two-parameter logistic models (Rank-2PLM) for items with two (pair) and three (triplet) statements in forced-choice questionnaires. The Rank-2PLM model for pairs is the MUPP-2PLM (Multi-Unidimensional Pairwise Preference) and, for triplets, is the Triplet-2PLM. Fisher's…
Descriptors: Questionnaires, Test Items, Item Response Theory, Models
Baldwin, Peter; Clauser, Brian E. – Journal of Educational Measurement, 2022
While score comparability across test forms typically relies on common (or randomly equivalent) examinees or items, innovations in item formats, test delivery, and efforts to extend the range of score interpretation may require a special data collection before examinees or items can be used in this way--or may be incompatible with common examinee…
Descriptors: Scoring, Testing, Test Items, Test Format
Wise, Steven L. – Education Inquiry, 2019
A decision of whether to move from paper-and-pencil to computer-based tests is based largely on a careful weighing of the potential benefits of a change against its costs, disadvantages, and challenges. This paper briefly discusses the trade-offs involved in making such a transition, and then focuses on a relatively unexplored benefit of…
Descriptors: Computer Assisted Testing, Cheating, Test Wiseness, Scores
Lynch, Sarah – Practical Assessment, Research & Evaluation, 2022
In today's digital age, tests are increasingly being delivered on computers. Many of these computer-based tests (CBTs) have been adapted from paper-based tests (PBTs). However, this change in mode of test administration has the potential to introduce construct-irrelevant variance, affecting the validity of score interpretations. Because of this,…
Descriptors: Computer Assisted Testing, Tests, Scores, Scoring
Boone, William J. – CBE - Life Sciences Education, 2016
This essay describes Rasch analysis psychometric techniques and how such techniques can be used by life sciences education researchers to guide the development and use of surveys and tests. Specifically, Rasch techniques can be used to document and evaluate the measurement functioning of such instruments. Rasch techniques also allow researchers to…
Descriptors: Item Response Theory, Psychometrics, Science Education, Educational Research
National Assessment Governing Board, 2017
The National Assessment of Educational Progress (NAEP) is the only continuing and nationally representative measure of trends in academic achievement of U.S. elementary and secondary school students in various subjects. For more than four decades, NAEP assessments have been conducted periodically in reading, mathematics, science, writing, U.S.…
Descriptors: Mathematics Achievement, Multiple Choice Tests, National Competency Tests, Educational Trends
National Assessment Governing Board, 2017
Since 1973, the National Assessment of Educational Progress (NAEP) has gathered information about student achievement in mathematics. Results of these periodic assessments, produced in print and web-based formats, provide valuable information to a wide variety of audiences. They inform citizens about the nature of students' comprehension of the…
Descriptors: Mathematics Tests, Mathematics Achievement, Mathematics Instruction, Grade 4
Kolen, Michael J.; Lee, Won-Chan – Educational Measurement: Issues and Practice, 2011
This paper illustrates that the psychometric properties of scores and scales that are used with mixed-format educational tests can impact the use and interpretation of the scores that are reported to examinees. Psychometric properties that include reliability and conditional standard errors of measurement are considered in this paper. The focus is…
Descriptors: Test Use, Test Format, Error of Measurement, Raw Scores
Eignor, Daniel R. – Educational Measurement: Issues and Practice, 2008
This article discusses a particular type of concordance table and the potential for test score misuse that may result from employing such a table. The concordance that is discussed is typically created between scores on different, nonequatable versions of a test that share the same or close to the same test title. These concordance tables often…
Descriptors: Scores, Tables (Data), Comparative Analysis, Equated Scores
Jin, Yan – Journal of Pan-Pacific Association of Applied Linguistics, 2011
The College English Test (CET) is an English language test designed for educational purposes, administered on a very large scale, and used for making high-stakes decisions. This paper discusses the key issues facing the CET during the course of its development in the past two decades. It argues that the most fundamental and critical concerns of…
Descriptors: High Stakes Tests, Language Tests, Measures (Individuals), Graduates
Schoonen, Rob; Verhallen, Marianne – Language Testing, 2008
The assessment of so-called depth of word knowledge has been the focus of research for some years now. In this article the construct of deep word knowledge is further specified as the decontextualized knowledge of word meanings and word associations. Most studies so far have involved adolescent and adult second language learners. In this article,…
Descriptors: Language Acquisition, Second Language Learning, Associative Learning, Foreign Countries
Shermis, Mark D.; DiVesta, Francis J. – Rowman & Littlefield Publishers, Inc., 2011
"Classroom Assessment in Action" clarifies the multi-faceted roles of measurement and assessment and their applications in a classroom setting. Comprehensive in scope, Shermis and DiVesta explain basic measurement concepts and show students how to interpret the results of standardized tests. From these basic concepts, the authors then…
Descriptors: Student Evaluation, Standardized Tests, Scores, Measurement

Hendrickson, James M. – Hispania, 1992
Prochievement tests, hybrids of proficiency and achievement tests, assess students' linguistic and communicative competence and provide a means for formative evaluation of student progress. Assessing Spanish listening comprehension/speaking proficiency, creating listening/oral proficiency test formats, and scoring are described. Guidelines for…
Descriptors: Communicative Competence (Languages), Community Colleges, Language Tests, Listening Comprehension
Clariana, Roy B.; Wallace, Patricia – Journal of Educational Computing Research, 2007
This proof-of-concept investigation describes a computer-based approach for deriving the knowledge structure of individuals and of groups from their written essays, and considers the convergent criterion-related validity of the computer-based scores relative to human rater essay scores and multiple-choice test scores. After completing a…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Construct Validity, Cognitive Structures
Lawrence, Ida M.; Schmidt, Amy Elizabeth – College Entrance Examination Board, 2001
The SAT® I: Reasoning Test is administered seven times a year. Primarily for security purposes, several different test forms are given at each administration. How is it possible to compare scores obtained from different test forms and from different test administrations? The purpose of this paper is to provide an overview of the statistical…
Descriptors: Scores, Comparative Analysis, Standardized Tests, College Entrance Examinations