Showing 3,961 to 3,975 of 9,530 results
Wickett, Maryann; Hendrix-Martin, Eunice – Stenhouse Publishers, 2011
Multiple-choice testing is an educational reality. Rather than complain about the negative impact these tests may have on teaching and learning, why not use them to better understand your students' true mathematical knowledge and comprehension? Maryann Wickett and Eunice Hendrix-Martin show teachers how to move beyond the student's answer--right…
Descriptors: Mathematics Education, Multiple Choice Tests, Grade 2, Grade 3
Tian, Feng – ProQuest LLC, 2011
There has been a steady increase in the use of mixed-format tests, that is, tests consisting of both multiple-choice items and constructed-response items in both classroom and large-scale assessments. This calls for appropriate equating methods for such tests. As Item Response Theory (IRT) has rapidly become mainstream as the theoretical basis for…
Descriptors: Item Response Theory, Comparative Analysis, Equated Scores, Statistical Analysis
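Equating mixed-format tests under IRT ultimately requires placing parameter estimates from different forms on a common scale. As a hedged illustration only (the abstract does not state which linking methods the dissertation studies), a minimal mean/sigma linking of Rasch difficulty estimates on hypothetical anchor-item values:

```python
import numpy as np

def mean_sigma_link(b_old, b_new):
    """Mean/sigma linking: find slope A and intercept B that place the
    new form's difficulty estimates on the old form's scale, by matching
    the mean and standard deviation of common (anchor) items."""
    A = np.std(b_old, ddof=1) / np.std(b_new, ddof=1)
    B = np.mean(b_old) - A * np.mean(b_new)
    return A, B

# hypothetical anchor-item difficulties on the two forms
b_old = np.array([-1.2, -0.3, 0.4, 1.1])
b_new = np.array([-1.0, -0.1, 0.6, 1.3])
A, B = mean_sigma_link(b_old, b_new)
# new-form difficulties transformed onto the old scale: A * b_new + B
```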
Peer reviewed
Thompson, Nathan A. – Practical Assessment, Research & Evaluation, 2011
Computerized classification testing (CCT) is an approach to designing tests with intelligent algorithms, similar to adaptive testing, but specifically designed for the purpose of classifying examinees into categories such as "pass" and "fail." Like adaptive testing for point estimation of ability, the key component is the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Classification, Probability
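The classification decision in CCT is most often driven by a sequential probability ratio test (SPRT) that compares the likelihood of the responses at two ability points bracketing the cut score. A minimal sketch under a Rasch model, with hypothetical bracketing points and error rates (not the specific design Thompson evaluates):

```python
import math

def rasch_p(theta, b):
    """Rasch probability of a correct response to an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def sprt_classify(responses, difficulties, theta0=-0.5, theta1=0.5,
                  alpha=0.05, beta=0.05):
    """SPRT for pass/fail classification: accumulate the log-likelihood
    ratio of each 0/1 response at theta1 vs. theta0 and stop when a
    Wald boundary is crossed."""
    lower = math.log(beta / (1 - alpha))   # cross below -> classify "fail"
    upper = math.log((1 - beta) / alpha)   # cross above -> classify "pass"
    llr = 0.0
    for x, b in zip(responses, difficulties):
        p0, p1 = rasch_p(theta0, b), rasch_p(theta1, b)
        llr += math.log(p1 / p0) if x == 1 else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "pass"
        if llr <= lower:
            return "fail"
    return "continue testing"

# a run of correct answers drives the LLR to the "pass" boundary
print(sprt_classify([1] * 20, [0.0] * 20))
```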
Peer reviewed
Day, James; Bonn, Doug – Physical Review Special Topics - Physics Education Research, 2011
The Concise Data Processing Assessment (CDPA) was developed to probe student abilities related to the nature of measurement and uncertainty and to handling data. The diagnostic is a ten question, multiple-choice test that can be used as both a pre-test and post-test. A key component of the development process was interviews with students, which…
Descriptors: Multiple Choice Tests, Test Reliability, Physics, Item Analysis
Peer reviewed
Wang, Changjiang; Gierl, Mark J. – Journal of Educational Measurement, 2011
The purpose of this study is to apply the attribute hierarchy method (AHM) to a subset of SAT critical reading items and illustrate how the method can be used to promote cognitive diagnostic inferences. The AHM is a psychometric procedure for classifying examinees' test item responses into a set of attribute mastery patterns associated with…
Descriptors: Reading Comprehension, Test Items, Critical Reading, Protocol Analysis
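The core of the AHM is that a prespecified attribute hierarchy restricts which mastery patterns are permissible, and each permissible pattern implies an expected item-response pattern through the Q-matrix. A minimal sketch with a hypothetical three-attribute linear hierarchy (A1 prerequisite to A2, A2 to A3), not the hierarchy Wang and Gierl built for the SAT items:

```python
from itertools import product

# Q-matrix: rows = items, columns = attributes each item requires
Q = [[1, 0, 0],
     [1, 1, 0],
     [1, 1, 1]]

def permissible(pattern):
    """Linear hierarchy: an attribute can be mastered only if its
    prerequisite (the preceding attribute) is also mastered."""
    return all(pattern[k] <= pattern[k - 1] for k in range(1, len(pattern)))

def expected_responses(pattern):
    """An item is answered correctly iff every required attribute is mastered."""
    return [int(all(a >= q for a, q in zip(pattern, row))) for row in Q]

patterns = [p for p in product([0, 1], repeat=3) if permissible(p)]
for p in patterns:
    print(p, expected_responses(p))
```

Observed responses are then classified by comparing them with these expected patterns.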
Peer reviewed
Gao, Lingyun; Rogers, W. Todd – Language Testing, 2011
The purpose of this study was to explore whether the results of Tree Based Regression (TBR) analyses, informed by a validated cognitive model, would enhance the interpretation of item difficulties in terms of the cognitive processes involved in answering the reading items included in two forms of the Michigan English Language Assessment Battery…
Descriptors: Test Items, Reading Tests, Item Analysis, Reading Processes
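Tree-based regression recursively splits items on coded cognitive features so that each split maximally reduces squared error in item difficulty. A stdlib-only sketch of one CART-style split step, on hypothetical feature codings and difficulties (not the MELAB data):

```python
def best_split(X, y):
    """One CART-style step: choose the binary feature whose split most
    reduces the sum of squared errors (SSE) in item difficulty."""
    def sse(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    total = sse(y)
    best = None
    for j in range(len(X[0])):
        left = [yi for xi, yi in zip(X, y) if xi[j] == 0]
        right = [yi for xi, yi in zip(X, y) if xi[j] == 1]
        gain = total - sse(left) - sse(right)
        if best is None or gain > best[1]:
            best = (j, gain)
    return best  # (feature index, SSE reduction)

# hypothetical codings: [inference required, high vocabulary load]
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0.2, 0.4, 0.7, 0.9]   # hypothetical item difficulties
print(best_split(X, y))
```

Repeating this step within each resulting node grows the tree, and the terminal-node means become the cognitively interpretable difficulty predictions.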
Peer reviewed
Lowrie, Tom; Diezmann, Carmel M.; Kay, Russell – Evaluation & Research in Education, 2011
The graphics-decoding proficiency (G-DP) instrument was developed as a screening test for the purpose of measuring students' (aged 8-11 years) capacity to solve graphics-based mathematics tasks. These tasks include number lines, column graphs, maps and pie charts. The instrument was developed within a theoretical framework which highlights the…
Descriptors: Screening Tests, Mathematics Achievement, Mathematical Aptitude, Graphs
Peer reviewed
Wang, Wen-Chung; Huang, Sheng-Yun – Educational and Psychological Measurement, 2011
The one-parameter logistic model with ability-based guessing (1PL-AG) has been recently developed to account for the effect of ability on guessing behavior in multiple-choice items. In this study, the authors developed algorithms for computerized classification testing under the 1PL-AG and conducted a series of simulations to evaluate their…
Descriptors: Computer Assisted Testing, Classification, Item Analysis, Probability
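The 1PL-AG replaces the constant pseudo-guessing parameter of the 3PL with a guessing probability that itself increases with ability. A hedged sketch of the item response function, assuming the commonly cited parameterization in which guessing success is a logistic function of ability scaled by a weight `lam` (the exact form used in the article may differ):

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def p_1pl_ag(theta, b, lam):
    """1PL-AG item response function (illustrative form): the examinee
    either knows the answer (1PL component) or, failing that, guesses,
    with guessing success an increasing logistic function of ability."""
    know = logistic(theta - b)
    guess = logistic(lam * theta)
    return know + (1.0 - know) * guess
```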
Peer reviewed
Anthony, Jason L.; Williams, Jeffrey M.; Duran, Lillian K.; Gillam, Sandra Laing; Liang, Lan; Aghara, Rachel; Swank, Paul R.; Assel, Mike A.; Landry, Susan H. – Journal of Educational Psychology, 2011
This study describes the dimensionality and continuum of Spanish phonological awareness (PA) skills in 3- to 6-year-old children. A 3 x 4 factorial design crossed word structure of test items (word, syllable, phoneme) with task (blending multiple-choice, blending free-response, elision multiple-choice, elision free-response) to assess 12 PA…
Descriptors: Test Items, Early Intervention, Phonology, Phonological Awareness
Peer reviewed
Conrad, Kendon J.; Iris, Madelyn; Ridings, John W.; Langley, Kate; Anetzberger, Georgia J. – Gerontologist, 2011
Purpose: This study tested key psychometric properties of the Older Adult Psychological Abuse Measure (OAPAM), one self-report scale of the Older Adult Mistreatment Assessment (OAMA). Design and Methods: Items and theory were developed in a prior concept mapping study. Subsequently, the measures were administered to 226 substantiated clients by 22…
Descriptors: Concept Mapping, Elder Abuse, Construct Validity, Field Tests
Peer reviewed
DeCarlo, Lawrence T. – Applied Psychological Measurement, 2011
Cognitive diagnostic models (CDMs) attempt to uncover latent skills or attributes that examinees must possess in order to answer test items correctly. The DINA (deterministic input, noisy "and") model is a popular CDM that has been widely used. It is shown here that a logistic version of the model can easily be fit with standard software for…
Descriptors: Bayesian Statistics, Computation, Cognitive Tests, Diagnostic Tests
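The DINA model has a simple closed form: a latent ideal response is 1 only when the examinee masters every attribute the Q-matrix requires for the item, and slip and guessing parameters perturb that ideal response. A minimal sketch (the reparameterization noted in the comment is how the model becomes a logistic regression fittable with standard software, as the abstract indicates):

```python
def dina_prob(alpha, q_row, slip, guess):
    """DINA item response probability.
    alpha: examinee's attribute mastery vector (0/1)
    q_row: the item's row of the Q-matrix (0/1)
    eta = 1 iff all required attributes are mastered;
    P(correct) = (1 - slip)^eta * guess^(1 - eta).
    Logistic form: logit P = f0 + f1 * eta,
    with f0 = logit(guess) and f1 = logit(1 - slip) - f0."""
    eta = int(all(a >= q for a, q in zip(alpha, q_row)))
    return (1 - slip) if eta else guess

# examinee mastering both required attributes vs. missing one
print(dina_prob([1, 1, 0], [1, 1, 0], slip=0.1, guess=0.2))
print(dina_prob([1, 0, 0], [1, 1, 0], slip=0.1, guess=0.2))
```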
Peer reviewed
Mroch, Andrew A.; Suh, Youngsuk; Kane, Michael T.; Ripkey, Douglas R. – Measurement: Interdisciplinary Research and Perspectives, 2009
This study uses the results of two previous papers (Kane, Mroch, Suh, & Ripkey, this issue; Suh, Mroch, Kane, & Ripkey, this issue) and the literature on linear equating to evaluate five linear equating methods along several dimensions, including the plausibility of their assumptions and their levels of bias and root mean squared difference…
Descriptors: Equated Scores, Methods, Test Items, Differences
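The five methods compared in this line of work share one functional form, differing in how the means and standard deviations are estimated from the data-collection design; a minimal sketch of that common form:

```python
def linear_equate(x, mu_x, sigma_x, mu_y, sigma_y):
    """Linear equating: map score x on form X to the form-Y scale by
    matching standardized deviation scores:
    y = mu_y + (sigma_y / sigma_x) * (x - mu_x)."""
    return mu_y + (sigma_y / sigma_x) * (x - mu_x)

# hypothetical moments: form X (mean 50, sd 10), form Y (mean 55, sd 12)
print(linear_equate(60, mu_x=50, sigma_x=10, mu_y=55, sigma_y=12))
```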
Peer reviewed
Lee, Won-Chan; Brennan, Robert L.; Wan, Lei – Applied Psychological Measurement, 2009
For a test that consists of dichotomously scored items, several approaches have been reported in the literature for estimating classification consistency and accuracy indices based on a single administration of a test. Classification consistency and accuracy have not been studied much, however, for "complex" assessments--for example,…
Descriptors: Classification, Reliability, Test Items, Scoring
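Classification consistency asks how often two parallel administrations would classify the same examinee the same way at a cut score. A Monte Carlo sketch for the simple dichotomous case (binomial true-score model, hypothetical examinee proportions), not the single-administration estimators for complex assessments the article develops:

```python
import random

def classification_consistency(true_props, n_items, cut, reps=2000, seed=7):
    """Estimate the proportion of examinees classified identically
    (pass/fail at `cut`) on two independent parallel forms, assuming
    each true proportion-correct generates binomial observed scores."""
    rng = random.Random(seed)
    same = 0
    for p in true_props:
        for _ in range(reps):
            s1 = sum(rng.random() < p for _ in range(n_items))
            s2 = sum(rng.random() < p for _ in range(n_items))
            same += (s1 >= cut) == (s2 >= cut)
    return same / (reps * len(true_props))

# hypothetical true proportion-correct values for five examinees
print(classification_consistency([0.3, 0.5, 0.6, 0.8, 0.9],
                                 n_items=40, cut=24))
```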
Peer reviewed
Dolan, Conor V.; Oort, Frans J.; Stoel, Reinoud D.; Wicherts, Jelte M. – Structural Equation Modeling: A Multidisciplinary Journal, 2009
We propose a method to investigate measurement invariance in the multigroup exploratory factor model, subject to target rotation. We consider both oblique and orthogonal target rotation. This method has clear advantages over other approaches, such as the use of congruence measures. We demonstrate that the model can be implemented readily in the…
Descriptors: Test Items, Psychology, Models, College Students
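In the orthogonal case, target rotation can be implemented as an orthogonal Procrustes problem solved via the SVD. A minimal sketch of that building block, on a hypothetical two-factor target pattern (not the authors' own implementation):

```python
import numpy as np

def orthogonal_target_rotation(loadings, target):
    """Find the orthogonal matrix T minimizing ||loadings @ T - target||_F
    (orthogonal Procrustes), rotating a factor loading matrix toward a
    prespecified target pattern."""
    u, _, vt = np.linalg.svd(loadings.T @ target)
    T = u @ vt
    return loadings @ T, T

# hypothetical target pattern: 4 items, 2 factors
target = np.array([[0.8, 0.0], [0.7, 0.1], [0.1, 0.9], [0.0, 0.8]])
# loadings observed in an arbitrary rotation of that pattern
angle = 0.5
R = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
rotated, T = orthogonal_target_rotation(target @ R, target)
# rotated now recovers the target pattern (up to numerical error)
```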
Peer reviewed
Miller, G. Edward; Fitzpatrick, Steven J. – Educational and Psychological Measurement, 2009
Incorrect handling of item parameter drift during the equating process can result in equating error. If the item parameter drift is due to construct-irrelevant factors, then inclusion of these items in the estimation of the equating constants can be expected to result in equating error. On the other hand, if the item parameter drift is related to…
Descriptors: Equated Scores, Computation, Item Response Theory, Test Items