Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 2
  Since 2016 (last 10 years): 2
  Since 2006 (last 20 years): 7
Descriptor
  Computation: 8
  Error of Measurement: 8
  Scoring: 8
  Item Response Theory: 5
  Scores: 4
  Test Items: 4
  Academic Standards: 3
  Goodness of Fit: 3
  Measures (Individuals): 3
  Psychometrics: 3
  Statistical Bias: 3
Source
  ETS Research Report Series: 2
  Educational and Psychological Measurement: 2
  New Mexico Public Education Department: 2
  Educational Testing Service: 1
  Universal Journal of Educational Research: 1
Author
  Atanasov, Dimitar V.: 1
  Bulut, Okan: 1
  Davey, Tim: 1
  Dimitrov, Dimiter M.: 1
  Dimoliatis, Ioannis D. K.: 1
  Gorgun, Guher: 1
  Griph, Gerald W.: 1
  Haberman, Shelby J.: 1
  Herbert, Erin: 1
  Jelastopulu, Eleni: 1
  Kim, Sooyeon: 1
Publication Type
  Reports - Research: 6
  Journal Articles: 5
  Numerical/Quantitative Data: 2
  Reports - Descriptive: 2
Education Level
  Elementary Secondary Education: 2
  Higher Education: 2
  Postsecondary Education: 2
  Early Childhood Education: 1
  Elementary Education: 1
  Grade 2: 1
  Grade 3: 1
  Primary Education: 1
Location
  New Mexico: 2
Assessments and Surveys
  Praxis Series: 1
Dimitrov, Dimiter M.; Atanasov, Dimitar V. – Educational and Psychological Measurement, 2021
This study presents a latent (item response theory-like) framework for a recently developed classical approach to test scoring, equating, and item analysis, referred to as the "D"-scoring method. Specifically, (a) person and item parameters are estimated under an item response function model on the "D"-scale (from 0 to 1) using…
Descriptors: Scoring, Equated Scores, Item Analysis, Item Response Theory
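As a rough illustration of the scale involved (my sketch, not the authors' estimator): if P_i(theta) is the item response function of item i and the test has n items, a score on the "D"-scale (0 to 1) can be read as a model-expected proportion correct.

```latex
% Illustrative reading of the D-scale only; the paper's actual estimator
% for person and item parameters may differ.
\[
  D(\theta) = \frac{1}{n}\sum_{i=1}^{n} P_i(\theta),
  \qquad 0 \le D(\theta) \le 1 .
\]
```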
Gorgun, Guher; Bulut, Okan – Educational and Psychological Measurement, 2021
In low-stakes assessments, some students may not reach the end of the test and leave some items unanswered due to various reasons (e.g., lack of test-taking motivation, poor time management, and test speededness). Not-reached items are often treated as incorrect or not-administered in the scoring process. However, when the proportion of…
Descriptors: Scoring, Test Items, Response Style (Tests), Mathematics Tests
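A minimal sketch of the two treatments named in the abstract (assumed response coding: 1 = correct, 0 = incorrect, None = not reached):

```python
# Two common treatments of not-reached items in proportion-correct scoring.
responses = [1, 0, 1, 1, None, None]  # last two items not reached

# Treatment 1: score not-reached items as incorrect.
as_incorrect = sum(r if r is not None else 0 for r in responses) / len(responses)

# Treatment 2: treat not-reached items as not administered
# (drop them from both numerator and denominator).
reached = [r for r in responses if r is not None]
as_not_administered = sum(reached) / len(reached)

print(f"treated as incorrect:        {as_incorrect:.2f}")         # 0.50
print(f"treated as not administered: {as_not_administered:.2f}")  # 0.75
```

The gap between the two numbers grows with the proportion of not-reached items, which is why that proportion matters for scoring decisions.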
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook Henry – ETS Research Report Series, 2015
The purpose of this inquiry was to investigate the effectiveness of item response theory (IRT) proficiency estimators in terms of estimation bias and error under multistage testing (MST). We chose a 2-stage MST design in which 1 adaptation to the examinees' ability levels takes place. It includes 4 modules (1 at Stage 1, 3 at Stage 2) and 3 paths…
Descriptors: Item Response Theory, Computation, Statistical Bias, Error of Measurement
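To make the routing structure concrete, here is a hypothetical sketch of a 2-stage MST with one Stage 1 routing module and three Stage 2 modules (the cut scores are invented for illustration; the report's actual modules and routing rules may differ):

```python
# Hypothetical 2-stage MST routing: 4 modules (1 routing + 3 second-stage),
# hence 3 possible paths through the test.

def route_to_stage2(stage1_score: int, cuts=(8, 14)) -> str:
    """Choose a Stage 2 module from the Stage 1 number-correct score.

    `cuts` are invented thresholds on the Stage 1 raw score.
    """
    low_cut, high_cut = cuts
    if stage1_score < low_cut:
        return "easy"    # path 1
    if stage1_score < high_cut:
        return "medium"  # path 2
    return "hard"        # path 3

for score in (5, 10, 18):
    print(f"stage 1 score {score} -> {route_to_stage2(score)} module")
```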
Dimoliatis, Ioannis D. K.; Jelastopulu, Eleni – Universal Journal of Educational Research, 2013
The surgical theatre educational environment measures STEEM, OREEM, and mini-STEEM for students (student-STEEM) contain a hitherto disregarded systematic overestimation (OE) caused by inaccurate percentage calculation. The aim of the present study was to investigate the magnitude of this systematic bias and to suggest a correction for it. After an…
Descriptors: Educational Environment, Scores, Grade Prediction, Academic Standards
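The kind of bias at issue can be shown with a worked equation (my illustration, assuming items rated on a 1-to-5 scale as in STEEM, so the scale minimum m is above zero):

```latex
% Naive vs. minimum-anchored percentage for an attained score x on a scale
% with minimum m and maximum M (for n items rated 1..5, m = n and M = 5n).
% The anchored form is my illustration of the fix, not necessarily the
% paper's exact correction.
\[
  \%_{\text{naive}} = 100\,\frac{x}{M},
  \qquad
  \%_{\text{anchored}} = 100\,\frac{x - m}{M - m}.
\]
% At the lowest possible score x = m, the naive formula still reports
% 100m/M > 0, i.e., a built-in overestimate.
```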
Haberman, Shelby J.; Sinharay, Sandip; Puhan, Gautam – ETS Research Report Series, 2006
Recently, there has been an increasing level of interest in reporting subscores. This paper examines the issue of reporting subscores at an aggregate level, especially at the level of institutions that the examinees belong to. A series of statistical analyses is suggested to determine when subscores at the institutional level have any added value…
Descriptors: Scores, Statistical Analysis, Error of Measurement, Reliability
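For orientation (my gloss; the abstract does not spell out the analyses): the usual criterion in this line of work compares how well the subscore and the total score each predict the true subscore, for instance via the proportional reduction in mean squared error (PRMSE), and treats the subscore as having added value only when it wins that comparison:

```latex
% Assumed form of the comparison: s = observed subscore, x = observed total
% score, tau_s = true subscore, hat-tau_s(.) = best linear predictor of tau_s.
\[
  \mathrm{PRMSE}(z) = 1 -
    \frac{E\left[(\tau_s - \hat\tau_s(z))^2\right]}{\mathrm{Var}(\tau_s)},
  \qquad
  \text{report } s \text{ only if } \mathrm{PRMSE}(s) > \mathrm{PRMSE}(x).
\]
```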
New Mexico Public Education Department, 2007
The purpose of the NMSBA technical report is to provide users and other interested parties with a general overview of the 2007 NMSBA and its technical characteristics. The 2007 technical report contains the following information: (1) Test development; (2) Scoring procedures; (3) Summary of student performance; (4) Statistical analyses of item and…
Descriptors: Interrater Reliability, Standard Setting, Measures (Individuals), Scoring
Rizavi, Saba; Way, Walter D.; Davey, Tim; Herbert, Erin – Educational Testing Service, 2004
Item parameter estimates vary for a variety of reasons, including estimation error, characteristics of the examinee samples, and context effects (e.g., item location effects, section location effects, etc.). Although we expect variation based on theory, there is reason to believe that observed variation in item parameter estimates exceeds what…
Descriptors: Adaptive Testing, Test Items, Computation, Context Effect
Griph, Gerald W. – New Mexico Public Education Department, 2006
The purpose of the NMSBA technical report is to provide users and other interested parties with a general overview of the 2006 NMSBA and its technical characteristics. The 2006 technical report contains the following information: (1) Test development; (2) Scoring procedures; (3) Calibration, scaling, and equating procedures; (4) Standard setting;…
Descriptors: Interrater Reliability, Standard Setting, Measures (Individuals), Scoring