Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 1 |
Since 2006 (last 20 years) | 4 |
Descriptor
Computation | 5 |
Test Items | 5 |
Weighted Scores | 5 |
Computer Assisted Testing | 2 |
Evaluation Methods | 2 |
Item Response Theory | 2 |
Models | 2 |
Statistical Analysis | 2 |
Student Evaluation | 2 |
Adaptive Testing | 1 |
Bayesian Statistics | 1 |
Source
Applied Psychological Measurement | 1 |
ETS Research Report Series | 1 |
Education Sciences | 1 |
Measurement & Evaluation in Counseling & Development | 1 |
Psychometrika | 1 |
Author
Chang, Hua-Hua | 2 |
Feldt, Leonard S. | 1 |
Ganzfried, Sam | 1 |
Jiang, Yanming | 1 |
Qian, Jiahe | 1 |
Shi, Ning-Zhong | 1 |
Sun, Shan-Shan | 1 |
Tao, Jian | 1 |
Ying, Zhiliang | 1 |
Yusuf, Farzana | 1 |
von Davier, Alina A. | 1 |
Publication Type
Journal Articles | 5 |
Reports - Research | 3 |
Reports - Descriptive | 1 |
Reports - Evaluative | 1 |
Education Level
Higher Education | 1 |
Laws, Policies, & Programs
No Child Left Behind Act 2001 | 1 |
Ganzfried, Sam; Yusuf, Farzana – Education Sciences, 2018
A problem faced by many instructors is that of designing exams that accurately assess the abilities of the students. Typically, these exams are prepared several days in advance, and generic question scores are assigned based on a rough approximation of the question difficulty and length. For example, for a recent class taught by the author, there were…
Descriptors: Weighted Scores, Test Construction, Student Evaluation, Multiple Choice Tests
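For readers unfamiliar with weighted scoring, the sketch below shows how per-question weights translate into student totals. It is illustrative only, not the authors' optimization approach; the function name and toy numbers are invented for the example.

```python
# A minimal, generic sketch of weighted exam scoring (not the authors'
# method): each question gets a weight, and a student's total is the
# weighted sum of correctly answered items.
import numpy as np

def weighted_scores(responses, weights, total_points=100.0):
    """responses: (students x items) 0/1 matrix of correct answers.
    weights: per-item raw weights, e.g. judged difficulty or length.
    Weights are rescaled so a perfect paper earns `total_points`."""
    weights = np.asarray(weights, dtype=float)
    scaled = weights * (total_points / weights.sum())   # rescale to exam total
    return np.asarray(responses, dtype=float) @ scaled  # one score per student

# Toy usage: 3 students, 4 questions with unequal judged difficulty.
R = np.array([[1, 1, 0, 1],
              [1, 0, 0, 0],
              [1, 1, 1, 1]])
print(weighted_scores(R, weights=[1, 2, 3, 2]))  # -> [62.5, 12.5, 100.]
```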
Qian, Jiahe; Jiang, Yanming; von Davier, Alina A. – ETS Research Report Series, 2013
Several factors could cause variability in item response theory (IRT) linking and equating procedures, such as the variability across examinee samples and/or test items, seasonality, regional differences, native language diversity, gender, and other demographic variables. Hence, the following question arises: Is it possible to select optimal…
Descriptors: Item Response Theory, Test Items, Sampling, True Scores
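As background for the linking topic, here is a minimal sketch of mean/sigma linking on a set of common items. It is a standard textbook method, not the sample-selection procedure investigated in the report; names and values are illustrative.

```python
# A small sketch of mean/sigma IRT linking on common (anchor) items.
import numpy as np

def mean_sigma_link(b_new, b_base):
    """Linking constants that put the new calibration on the base scale.
    b_new, b_base: difficulty estimates of the same anchor items from the
    two separate calibrations. Returns (A, B) with b_on_base = A*b_new + B."""
    b_new, b_base = np.asarray(b_new, float), np.asarray(b_base, float)
    A = b_base.std(ddof=1) / b_new.std(ddof=1)
    B = b_base.mean() - A * b_new.mean()
    return A, B

# Toy anchor set: the two calibrations differ only by a shift.
A, B = mean_sigma_link(b_new=[-1.2, -0.3, 0.4, 1.1],
                       b_base=[-0.9, 0.0, 0.7, 1.4])
print(A, B)  # -> 1.0, 0.3; discriminations would transform as a_new / A
```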
Sun, Shan-Shan; Tao, Jian; Chang, Hua-Hua; Shi, Ning-Zhong – Applied Psychological Measurement, 2012
For mixed-type tests composed of dichotomous and polytomous items, polytomous items often yield more information than dichotomous items. To reflect the difference between the two types of items and to improve the precision of ability estimation, an adaptive weighted maximum-a-posteriori (WMAP) estimation is proposed. To evaluate the performance of…
Descriptors: Monte Carlo Methods, Computation, Item Response Theory, Weighted Scores
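A simplified sketch of weighted MAP ability estimation follows. It assumes 2PL dichotomous items and a standard-normal prior, so it only gestures at the adaptive WMAP proposed for mixed-type tests; all names and parameter values are illustrative.

```python
# Simplified weighted MAP ability estimation, assuming 2PL items and a
# N(0,1) prior: per-item weights rescale each item's log-likelihood
# contribution, and the posterior mode is found on a theta grid.
import numpy as np

def wmap_theta(resp, a, b, w, grid=np.linspace(-4, 4, 401)):
    """resp: 0/1 responses; a, b: 2PL discriminations/difficulties;
    w: per-item weights (e.g. larger for more informative items)."""
    resp, a, b, w = map(np.asarray, (resp, a, b, w))
    p = 1.0 / (1.0 + np.exp(-a * (grid[:, None] - b)))       # P(correct | theta)
    loglik = resp * np.log(p) + (1 - resp) * np.log(1 - p)   # per-item log-lik
    logpost = (w * loglik).sum(axis=1) - 0.5 * grid**2       # weighted + prior
    return grid[np.argmax(logpost)]

# Toy usage: 5 items, heavier weight on the last two.
print(wmap_theta(resp=[1, 1, 0, 1, 0],
                 a=[1.0, 1.2, 0.8, 1.5, 1.1],
                 b=[-1.0, -0.5, 0.0, 0.5, 1.0],
                 w=[1, 1, 1, 2, 2]))
```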
Chang, Hua-Hua; Ying, Zhiliang – Psychometrika, 2008
It has been widely reported that in computerized adaptive testing some examinees may get much lower scores than they normally would if an alternative paper-and-pencil version were given. The main purpose of this investigation is to quantitatively reveal the cause of the underestimation phenomenon. The logistic models, including the 1PL, 2PL, and…
Descriptors: Adaptive Testing, Computer Assisted Testing, Computation, Test Items
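For reference, the logistic item response functions named in the abstract (1PL, 2PL, 3PL) can be written compactly as below; this is standard notation, not the paper's analysis of the underestimation effect.

```python
# Logistic IRT response functions: the 3PL nests the 2PL (c = 0) and the
# 1PL (c = 0, a = 1).
import numpy as np

def p_correct(theta, a=1.0, b=0.0, c=0.0):
    """3PL: P(X=1 | theta) = c + (1 - c) / (1 + exp(-a*(theta - b)))."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (np.asarray(theta) - b)))

theta = np.array([-2.0, 0.0, 2.0])
print(p_correct(theta))                       # 1PL with b = 0
print(p_correct(theta, a=1.7, b=0.5))         # 2PL
print(p_correct(theta, a=1.7, b=0.5, c=0.2))  # 3PL with guessing
```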

Feldt, Leonard S. – Measurement & Evaluation in Counseling & Development, 2004
In some settings, the validity of a battery composite or a test score is enhanced by weighting some parts or items more heavily than others in the total score. This article describes methods of estimating the total score reliability coefficient when differential weights are used with items or parts.
Descriptors: Test Items, Scoring, Cognitive Processes, Test Validity
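The classical formula for the reliability of a differentially weighted composite, assuming uncorrelated part errors, is sketched below; the article's estimators may differ in detail, and the helper name and numbers are illustrative.

```python
# Reliability of a weighted composite C = sum_i w_i * X_i, assuming
# uncorrelated part errors: the composite error variance is the weighted
# sum of each part's error variance.
import numpy as np

def weighted_composite_reliability(weights, cov, part_rel):
    """weights: part weights w_i; cov: covariance matrix of the observed
    part scores; part_rel: reliability of each part.
    rho_C = 1 - sum_i w_i^2 * var_i * (1 - rho_i) / var(composite)."""
    w = np.asarray(weights, float)
    cov = np.asarray(cov, float)
    rel = np.asarray(part_rel, float)
    error_var = np.sum(w**2 * np.diag(cov) * (1.0 - rel))
    composite_var = w @ cov @ w
    return 1.0 - error_var / composite_var

# Toy usage: two parts, the second weighted twice as heavily.
cov = np.array([[25.0, 10.0],
                [10.0, 36.0]])
print(weighted_composite_reliability([1, 2], cov, part_rel=[0.80, 0.85]))
```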