Showing 1 to 15 of 85 results
Peer reviewed
Aybek, Eren Can; Demirtasli, R. Nukhet – International Journal of Research in Education and Science, 2017
This article aims to provide a theoretical framework for computerized adaptive tests (CAT) and item response theory models for polytomous items. It also aims to introduce simulation and live CAT software to researchers in the field. Computerized adaptive test algorithm, assumptions of item response theory models, nominal response…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Test Items
Peer reviewed
Ockey, Gary J.; Wagner, Elvis – Language Learning & Language Teaching, 2018
This book is relevant for language testers, listening researchers, and oral proficiency teachers, in that it explores four broad themes related to the assessment of L2 listening ability: the use of authentic, real-world spoken texts; the effects of different speech varieties of listening inputs; the use of audio-visual texts; and assessing…
Descriptors: Listening Comprehension, Second Language Learning, Second Language Instruction, Listening Comprehension Tests
Peer reviewed
Williams, Marian E.; Sando, Lara; Soles, Tamara Glen – Journal of Psychoeducational Assessment, 2014
Cognitive assessment of young children contributes to high-stakes decisions because results are often used to determine eligibility for early intervention and special education. Previous reviews of cognitive measures for young children highlighted concerns regarding adequacy of standardization samples, steep item gradients, and insufficient floors…
Descriptors: Intelligence Tests, Decision Making, High Stakes Tests, Eligibility
Nering, Michael L., Ed.; Ostini, Remo, Ed. – Routledge, Taylor & Francis Group, 2010
This comprehensive "Handbook" focuses on the most used polytomous item response theory (IRT) models. These models help us understand the interaction between examinees and test questions where the questions have various response categories. The book reviews all of the major models and includes discussions about how and where the models…
Descriptors: Guides, Item Response Theory, Test Items, Correlation
Peer reviewed
Van Der Flier, Henk; And Others – Journal of Educational Measurement, 1984
Two strategies for assessing item bias are discussed: methods comparing item difficulties unconditional on ability and methods comparing probabilities of response conditional on ability. Results suggest that the iterative logit method is an improvement on the noniterative one and is efficient in detecting biased and unbiased items. (Author/DWH)
Descriptors: Algorithms, Evaluation Methods, Item Analysis, Scores
Ackerman, Terry A.; Spray, Judith A. – 1986
A model of test item dependency is presented and used to illustrate the effect that violations of local independence have on the behavior of item characteristic curves. The dependency model is flexible enough to simulate the interaction of a number of factors including item difficulty and item discrimination, varying degrees of item dependence,…
Descriptors: Difficulty Level, Item Analysis, Latent Trait Theory, Mathematical Models
Reckase, Mark D.; McKinley, Robert L. – 1984
A new indicator of item difficulty, which identifies effectiveness ranges, overcomes the limitations of other item difficulty indexes in describing the difficulty of an item or a test as a whole and in aiding the selection of appropriate ability level items for a test. There are three common uses of the term "item difficulty": (1) the probability…
Descriptors: Difficulty Level, Evaluation Methods, Item Analysis, Latent Trait Theory
Ackerman, Terry A. – 1987
Concern has been expressed over the item response theory (IRT) assumption that a person's ability can be estimated in a unidimensional latent space. To examine whether or not the response to an item requires only a single latent ability, unidimensional ability estimates were compared for data generated from the multidimensional item response…
Descriptors: Ability, Computer Simulation, Difficulty Level, Item Analysis
Holland, Paul W.; Thayer, Dorothy T. – 1985
An alternative definition has been developed of the delta scale of item difficulty used at Educational Testing Service. The traditional delta scale uses an inverse normal transformation based on normal ogive models developed years ago. However, no use is made of this fact in typical uses of item deltas. It is simply one way to make the probability…
Descriptors: Difficulty Level, Error Patterns, Estimation (Mathematics), Item Analysis
Doolittle, Allen E. – 1983
The stability of selected indices for detecting differential item performance (item bias), from one randomly equivalent sample to another, is addressed. Some recent research has criticized these indices as too unreliable for utility in measuring bias in achievement test items. Using data from a national testing of the ACT Assessment, however, this…
Descriptors: Black Students, Item Analysis, Racial Factors, Reliability
Shannon, Gregory A. – 1983
Rescoring of Center for Occupational and Professional Assessment objective-referenced tests is decided largely by content experts selected by client organizations. A few of the test items, statistically flagged for review, are not rescored. Some of this incongruence could be due to the use of the biserial correlation (r-biserial) as an…
Descriptors: Adults, Criterion Referenced Tests, Item Analysis, Occupational Tests
Reckase, Mark D.; McKinley, Robert L. – 1984
The purpose of this paper is to present a generalization of the concept of item difficulty to test items that measure more than one dimension. Three common definitions of item difficulty were considered: the proportion of correct responses for a group of individuals; the probability of a correct response to an item for a specific person; and the…
Descriptors: Difficulty Level, Item Analysis, Latent Trait Theory, Mathematical Models
Chevalaz, Gerard M.; Tatsuoka, Kikumi K. – 1983
Two order-theoretic techniques were presented and compared. The ordering theory of Krus and Bart (1974) and the extension of Takeya's item relational structure analysis (IRS) by Tatsuoka and Tatsuoka (1981) were used to extract the hierarchical item structure from three datasets. Directed graphs were constructed and both methods were assessed as to how…
Descriptors: Comparative Analysis, Computer Simulation, Instructional Design, Item Analysis
Linacre, John M.; Wright, Benjamin D. – 1987
The Mantel-Haenszel (MH) procedure attempts to identify and quantify differential item performance (item bias). This paper summarizes the MH statistics, and identifies the parameters they estimate. An equivalent procedure based on the Rasch model is described. The theoretical properties of the two approaches are compared and shown to require the…
Descriptors: Algorithms, Estimation (Mathematics), Item Analysis, Measurement Techniques
Doolittle, Allen E. – 1984
The definition of differential item performance (DIP), often referred to as item bias, is discussed. DIP is suggested as a comprehensive term to encompass item bias (item invalidity which is unfair to certain population subgroups) and instructional bias (a valid reflection of group differences in instruction or background). This study investigated…
Descriptors: College Entrance Examinations, Higher Education, Item Analysis, Mathematics Achievement