Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 1 |
Since 2006 (last 20 years) | 3 |
Descriptor
Ability | 19 |
Difficulty Level | 19 |
Test Construction | 19 |
Test Items | 14 |
Adaptive Testing | 8 |
Item Response Theory | 7 |
Computer Assisted Testing | 5 |
Measurement Techniques | 5 |
Guessing (Tests) | 4 |
Item Analysis | 4 |
Item Banks | 4 |
Source
Educational and Psychological… | 2 |
Applied Measurement in… | 1 |
CBE - Life Sciences Education | 1 |
Journal of Applied Measurement | 1 |
Journal of Research in… | 1 |
ProQuest LLC | 1 |
Psychometrika | 1 |
Author
Lord, Frederic M. | 2 |
Reckase, Mark D. | 2 |
Reese, Lynda M. | 2 |
Schnipke, Deborah L. | 2 |
Weiss, David J. | 2 |
Anderson, Trevor R. | 1 |
Berger, Martijn P. F. | 1 |
Betz, Nancy E. | 1 |
Bickel, Peter | 1 |
Buyske, Steven | 1 |
Chang, Huahua | 1 |
Publication Type
Reports - Research | 8 |
Journal Articles | 6 |
Reports - Evaluative | 6 |
Speeches/Meeting Papers | 5 |
Dissertations/Theses -… | 1 |
Guides - Non-Classroom | 1 |
Information Analyses | 1 |
Reports - Descriptive | 1 |
Education Level
Higher Education | 1 |
Postsecondary Education | 1 |
Location
South Carolina | 1 |
Assessments and Surveys
ACT Assessment | 1 |
Dasgupta, Annwesa P.; Anderson, Trevor R.; Pelaez, Nancy J. – CBE - Life Sciences Education, 2016
Researchers, instructors, and funding bodies in biology education are unanimous about the importance of developing students' competence in experimental design. Despite this, only limited measures are available for assessing such competence development, especially in the areas of molecular and cellular biology. Also, existing assessments do not…
Descriptors: Biology, Research Design, Science Tests, Student Evaluation
He, Wei; Reckase, Mark D. – Educational and Psychological Measurement, 2014
For computerized adaptive tests (CATs) to work well, they must have an item pool with sufficient numbers of good quality items. Many researchers have pointed out that, in developing item pools for CATs, not only is the item pool size important but also the distribution of item parameters and practical considerations such as content distribution…
Descriptors: Item Banks, Test Length, Computer Assisted Testing, Adaptive Testing
Rao, Vasanthi – ProQuest LLC, 2012
In 1997, based on the amendments to Individuals with Disabilities Education Act (IDEA), all states were faced with a statutory requirement to develop and implement alternate assessments for students with disabilities unable to participate in the statewide large-scale assessment. States were given the challenge of creating, implementing, and…
Descriptors: Alternative Assessment, Psychometrics, Item Response Theory, Models

Linacre, John M.; Wright, Benjamin D. – Journal of Applied Measurement, 2002
Describes an extension to the Rasch model for fundamental measurement in which there is parameterization not only for examinee ability and item difficulty but also for judge severity. Discusses variants of this model and judging plans, and explains its use in an empirical testing situation. (SLD)
Descriptors: Ability, Difficulty Level, Evaluators, Item Response Theory
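The parameterization the Linacre and Wright abstract describes can be sketched directly: in the many-facet Rasch model, the log-odds of success decompose into examinee ability minus item difficulty minus judge severity. The sketch below uses the dichotomous form for simplicity; the paper's model also covers rating scales.

```python
import math

def facets_prob(theta, difficulty, severity):
    """Probability of success in a dichotomous many-facet Rasch model:
    logit P = ability - item difficulty - judge severity."""
    logit = theta - difficulty - severity
    return 1.0 / (1.0 + math.exp(-logit))

# A more severe judge lowers the success probability for the same
# examinee and item.
lenient = facets_prob(1.0, 0.0, 0.0)
severe = facets_prob(1.0, 0.0, 0.5)
```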

Bickel, Peter; Buyske, Steven; Chang, Huahua; Ying, Zhiliang – Psychometrika, 2001
Examined the assumption that matching difficulty levels of test items with an examinee's ability makes a test more efficient and challenged this assumption through a class of one-parameter item response theory models. Found the validity of the fundamental assumption to be closely related to the van Zwet tail ordering of symmetric distributions (W.…
Descriptors: Ability, Difficulty Level, Item Response Theory, Test Construction
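The "fundamental assumption" Bickel et al. examine can be made concrete: under a one-parameter (Rasch) model, an item's Fisher information is P(1 − P), which peaks exactly when item difficulty matches ability. A minimal sketch of that textbook result (not code from the paper):

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def rasch_item_information(theta, b):
    """Fisher information of a Rasch item: I(theta) = P * (1 - P)."""
    p = rasch_prob(theta, b)
    return p * (1.0 - p)

# Information is greatest when b == theta (P = 0.5, so I = 0.25) and
# falls off as the difficulty-ability mismatch grows.
```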
Gershon, Richard C. – 1991
The Johnson O'Connor Research Foundation, which produces vocabulary instructional materials for test takers, is in the process of determining the difficulty values of nontechnical words in the English language. To this end, the Foundation writes test items for vocabulary words and tests them in schools. The items are then calibrated using the…
Descriptors: Ability, Difficulty Level, Goodness of Fit, Item Response Theory
Zhu, Daming; Fan, Meichu – 1999
The convention for selecting starting points (that is, initial items) on a computerized adaptive test (CAT) is to choose as starting points items of medium difficulty for all examinees. Selecting a starting point based on prior information about an individual's ability was first suggested many years ago, but has been believed unimportant provided…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Difficulty Level
Reese, Lynda M.; Schnipke, Deborah L. – 1999
A two-stage design provides a way of roughly adapting item difficulty to test-taker ability. All test takers take a parallel stage-one test, and based on their scores, they are routed to tests of different difficulty levels in the second stage. This design provides some of the benefits of standard computer adaptive testing (CAT), such as increased…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Difficulty Level
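The routing step in such a two-stage design amounts to mapping the stage-one score onto second-stage forms of different difficulty. A sketch of that logic, where the form names and cutoff scores are invented for illustration, not taken from the paper:

```python
def route_to_stage_two(stage_one_score, cutoffs=(10, 20)):
    """Route a test taker to a second-stage form by stage-one score.
    Cutoffs are hypothetical; an operational test would set them from
    the stage-one score distribution."""
    low, high = cutoffs
    if stage_one_score < low:
        return "easy"
    if stage_one_score < high:
        return "medium"
    return "hard"
```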
Prestwood, J. Stephen; Weiss, David J. – 1977
The accuracy with which testees perceived the difficulty of ability-test items was investigated by administering two 41-item conventional tests on verbal ability. High- and low-ability groups responded to test items by choosing the correct alternative and then rating each item's difficulty relative to their levels of ability. Least-squares…
Descriptors: Ability, Difficulty Level, Higher Education, Item Analysis

Feldt, Leonard S. – Applied Measurement in Education, 1993
The recommendation that the reliability of multiple-choice tests will be enhanced if the distribution of item difficulties is concentrated at approximately 0.50 is reinforced and extended in this article by viewing the 0/1 item scoring as a dichotomization of an underlying normally distributed ability score. (SLD)
Descriptors: Ability, Difficulty Level, Guessing (Tests), Mathematical Models
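Feldt's framing — a 0/1 item score as a dichotomization of an underlying normal ability — leads to the classical result that the item-trait correlation equals φ(z_p)/√(p(1 − p)), which is largest at p = 0.50. A sketch of that relationship (the bisection inverse-CDF is only a convenience to keep the example self-contained):

```python
import math

def normal_pdf(x):
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def normal_ppf(p, lo=-10.0, hi=10.0):
    """Inverse standard-normal CDF by bisection on the erf-based CDF."""
    cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def item_trait_correlation(p):
    """Correlation between a 0/1 item score and the underlying normal
    ability when the item is scored 1 for ability above a cutoff passed
    by proportion p: r = phi(z_p) / sqrt(p * (1 - p))."""
    z = normal_ppf(1.0 - p)  # cutoff with P(ability > z) = p
    return normal_pdf(z) / math.sqrt(p * (1.0 - p))

# The correlation, and hence the item's contribution to reliability,
# peaks at p = 0.5, as the abstract's recommendation implies.
```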
Schnipke, Deborah L.; Reese, Lynda M. – 1997
Two-stage and multistage test designs provide a way of roughly adapting item difficulty to test-taker ability. All test takers take a parallel stage-one test, and, based on their scores, they are routed to tests of different difficulty levels in subsequent stages. These designs provide some of the benefits of standard computerized adaptive testing…
Descriptors: Ability, Adaptive Testing, Algorithms, Comparative Analysis

Lord, Frederic M. – Educational and Psychological Measurement, 1971
Descriptors: Ability, Adaptive Testing, Computer Oriented Programs, Difficulty Level
Veerkamp, Wim J. J.; Berger, Martijn P. F. – 1994
Items with the highest discrimination parameter values in a logistic item response theory (IRT) model do not necessarily give maximum information. This paper shows which discrimination parameter values (as a function of the guessing parameter and the distance between person ability and item difficulty) give maximum information for the…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
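The quantity at issue in the Veerkamp and Berger abstract is the 3PL item information, I(θ) = a²·(1 − P)/P·((P − c)/(1 − c))². With a nonzero guessing parameter and a fixed ability-difficulty distance, information is not monotone in the discrimination a, so the highest-a item is not automatically the most informative. A sketch with illustrative parameter values:

```python
import math

def p3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL model."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def info3pl(theta, a, b, c):
    """Fisher information of a 3PL item:
    I = a^2 * (1 - P)/P * ((P - c)/(1 - c))^2."""
    p = p3pl(theta, a, b, c)
    return a ** 2 * (1.0 - p) / p * ((p - c) / (1.0 - c)) ** 2

# With guessing c = 0.2 and the examinee one unit above item difficulty,
# information rises and then falls as discrimination a grows: an
# interior value of a maximizes it.
grid = [i / 100.0 for i in range(1, 801)]
best_a = max(grid, key=lambda a: info3pl(1.0, a, 0.0, 0.2))
```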
Huntley, Renee M.; Miller, Sherri – 1994
Whether the shaping of test items can itself result in qualitative differences in examinees' comprehension of reading passages was studied using the Pearson-Johnson item classification system. The specific practice studied incorporated, within an item stem, line references that point the examinee to a specific location within a reading passage.…
Descriptors: Ability, Classification, Difficulty Level, High School Students
Reckase, Mark D. – 1974
An application of the two-parameter logistic (Rasch) model to tailored testing is presented. The model is discussed along with the maximum likelihood estimation of the ability parameters given the response pattern and easiness parameter estimates for the items. The technique has been programmed for use with an interactive computer terminal. Use…
Descriptors: Ability, Adaptive Testing, Computer Assisted Instruction, Difficulty Level
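The maximum likelihood step Reckase describes can be sketched as a Newton-Raphson iteration on the Rasch log-likelihood: the score function is Σ(uᵢ − Pᵢ) and its derivative is −ΣPᵢ(1 − Pᵢ). This is a generic textbook sketch, not the 1974 program, and it ignores the all-correct and all-incorrect patterns, for which the MLE diverges.

```python
import math

def estimate_ability(responses, difficulties, n_iter=20):
    """Newton-Raphson ML estimate of ability under the Rasch model.

    responses: list of 0/1 item scores
    difficulties: matching list of item difficulty parameters
    """
    theta = 0.0
    for _ in range(n_iter):
        probs = [1.0 / (1.0 + math.exp(-(theta - b))) for b in difficulties]
        first = sum(u - p for u, p in zip(responses, probs))   # score
        second = -sum(p * (1.0 - p) for p in probs)            # Hessian
        theta -= first / second
    return theta
```

With a symmetric response pattern on symmetric difficulties the estimate is zero; getting the easy item right and missing the hard one implies average ability.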