Showing 5,626 to 5,640 of 9,552 results
Peer reviewed
Paek, Insu; Young, Michael J. – Applied Measurement in Education, 2005
When an item response theory (IRT) model is estimated by marginal maximum likelihood, person parameters are usually treated as random parameters that follow a prior distribution, so that the structural parameters of the model can be estimated. For example, both PARSCALE (Muraki & Bock, 1999) and BILOG 3 (Mislevy & Bock,…
Descriptors: Item Response Theory, Test Items, Maximum Likelihood Statistics, Test Bias
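The marginalization described in this abstract can be sketched numerically. The following is a minimal illustration, not PARSCALE or BILOG code, of how a standard-normal prior over the person parameter is integrated out of the likelihood of a 3PL response pattern; all item parameters and the quadrature scheme are hypothetical choices for the example.

```python
import numpy as np

def three_pl(theta, a, b, c):
    # 3PL item response function: P(correct | theta)
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

def marginal_likelihood(responses, a, b, c, n_quad=41):
    # Integrate the person parameter out over a standard-normal prior,
    # using simple rectangular quadrature on a fixed grid.
    nodes = np.linspace(-4, 4, n_quad)
    prior = np.exp(-0.5 * nodes**2)
    prior /= prior.sum()                      # normalized quadrature weights
    P = three_pl(nodes[:, None], a, b, c)     # shape (n_quad, n_items)
    # Likelihood of the full response pattern at each quadrature node
    lik = np.prod(np.where(responses, P, 1 - P), axis=1)
    return float(lik @ prior)

# One examinee answering three items (True = correct); values are illustrative
resp = np.array([True, False, True])
a = np.array([1.2, 0.8, 1.5])   # discrimination
b = np.array([-0.5, 0.0, 0.7])  # difficulty
c = np.array([0.2, 0.2, 0.2])   # pseudo-guessing
print(marginal_likelihood(resp, a, b, c))
```

In an actual MML fit, this marginal likelihood would be maximized over the item (structural) parameters, typically with an EM algorithm rather than the direct evaluation shown here.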
Peer reviewed
Allen, Nancy L.; Holland, Paul W.; Thayer, Dorothy T. – Journal of Educational Measurement, 2005
Allowing students to choose the question(s) that they will answer from among several possible alternatives is often viewed as a mechanism for increasing fairness in certain types of assessments. The fairness of optional topic choice is not a universally accepted fact, however, and various studies have been done to assess this question. We examine…
Descriptors: Test Theory, Test Items, Student Evaluation, Evaluation Methods
Peer reviewed
Gierl, Mark J. – Educational Measurement: Issues and Practice, 2005
In this paper I describe and illustrate the Roussos-Stout (1996) multidimensionality-based DIF analysis paradigm, with emphasis on its implications for the selection of a matching and studied subtest for DIF analyses. Standard DIF practice encourages an exploratory search for matching subtest items based on purely statistical criteria, such as a…
Descriptors: Models, Test Items, Test Bias, Statistical Analysis
Peer reviewed
Dodeen, Hamzeh – Journal of Educational Measurement, 2004
The effect of item parameters (discrimination, difficulty, and level of guessing) on the item-fit statistic was investigated using simulated dichotomous data. Nine tests were simulated using 1,000 persons, 50 items, three levels of item discrimination, three levels of item difficulty, and three levels of guessing. The item fit was estimated using…
Descriptors: Item Response Theory, Difficulty Level, Test Items, Guessing (Tests)
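A simulation design of the kind this abstract describes, dichotomous data generated under crossed levels of item discrimination, difficulty, and guessing, can be sketched as follows. The parameter levels, the single guessing level, and the seed are illustrative assumptions, not the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_3pl(n_persons, a, b, c, rng):
    # Draw person abilities from N(0, 1) and sample dichotomous
    # responses under the 3PL model.
    theta = rng.standard_normal(n_persons)
    p = c + (1 - c) / (1 + np.exp(-a * (theta[:, None] - b)))
    return (rng.random(p.shape) < p).astype(int)

# Crossed item-parameter levels, echoing the design described above:
# three discriminations x three difficulties, one guessing level
a = np.repeat([0.5, 1.0, 1.5], 3)
b = np.tile([-1.0, 0.0, 1.0], 3)
c = np.full(9, 0.2)
data = simulate_3pl(1000, a, b, c, rng)
print(data.shape)         # (1000, 9)
print(data.mean(axis=0))  # observed proportion correct per item
```

Item-fit statistics would then be computed on `data` by comparing observed and model-predicted proportions correct within ability strata; that step is omitted here.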
Peer reviewed
Hewitt, Margaret A.; Homan, Susan P. – Reading Research and Instruction, 2004
Test validity issues considered by test developers and school districts rarely include individual item readability levels. In this study, items from a major standardized test were examined for individual item readability level and item difficulty. The Homan-Hewitt Readability Formula was applied to items across three grade levels. Results of…
Descriptors: Test Validity, Test Items, Standardized Tests, Readability Formulas
Peer reviewed
Rickard, Timothy C. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2004
This article investigates the transition to memory-based performance that commonly occurs with practice on tasks that initially require use of a multistep algorithm. In an alphabet arithmetic task, item response times exhibited pronounced step-function decreases after moderate practice that were uniquely predicted by T. C. Rickard's (1997)…
Descriptors: Inferences, Thinking Skills, Test Items, Memory
Peer reviewed
Pek, Peng-Kiat; Poh, Kim-Leng – Journal of Educational Computing Research, 2004
Newtonian mechanics is a core module in technology courses but is difficult for many students to learn. Computerized tutoring can help teachers provide individualized instruction. This article presents the application of decision theory to develop a tutoring system, "iTutor", to select optimal tutoring actions under uncertainty of…
Descriptors: Test Items, Educational Technology, Tutoring, Individualized Instruction
Peer reviewed
Wang, Wen-Chung; Su, Ya-Hui – Applied Measurement in Education, 2004
In this study we investigated the effects of the average signed area (ASA) between the item characteristic curves of the reference and focal groups and three test purification procedures on the uniform differential item functioning (DIF) detection via the Mantel-Haenszel (M-H) method through Monte Carlo simulations. The results showed that ASA,…
Descriptors: Test Bias, Student Evaluation, Evaluation Methods, Test Items
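The Mantel-Haenszel common odds ratio that underlies uniform DIF detection is computed from 2x2 tables stratified by total score. This is a generic sketch with hypothetical counts, not the authors' simulation code; the ETS delta transform shown at the end is the conventional reporting scale.

```python
import numpy as np

def mantel_haenszel_alpha(correct_ref, n_ref, correct_foc, n_foc):
    # Common odds ratio estimate across score strata k:
    #   alpha_MH = sum_k(A_k * D_k / T_k) / sum_k(B_k * C_k / T_k)
    # where A/B are reference-group correct/incorrect counts and
    # C/D are focal-group correct/incorrect counts in stratum k.
    A = np.asarray(correct_ref, float)
    B = np.asarray(n_ref, float) - A
    C = np.asarray(correct_foc, float)
    D = np.asarray(n_foc, float) - C
    T = A + B + C + D
    return (A * D / T).sum() / (B * C / T).sum()

# Hypothetical counts in three total-score strata (low, mid, high)
alpha = mantel_haenszel_alpha(
    correct_ref=[30, 60, 90], n_ref=[100, 100, 100],
    correct_foc=[20, 50, 85], n_foc=[100, 100, 100],
)
# ETS delta scale: MH D-DIF = -2.35 * ln(alpha); |D-DIF| >= 1.5 with
# significance is commonly flagged as "C"-level (large) DIF.
print(round(-2.35 * np.log(alpha), 3))  # prints -1.087
```

A negative D-DIF on this scale indicates the item favors the reference group after conditioning on total score.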
Peer reviewed
Zenisky, April L.; Hambleton, Ronald K.; Robin, Frederic – Educational Assessment, 2004
Differential item functioning (DIF) analyses are a routine part of the development of large-scale assessments. Less common are studies to understand the potential sources of DIF. The goals of this study were (a) to identify gender DIF in a large-scale science assessment and (b) to look for trends in the DIF and non-DIF items due to content,…
Descriptors: Program Effectiveness, Test Format, Science Tests, Test Items
Peer reviewed
Hoijtink, Herbert; Notenboom, Annelise – Psychometrika, 2004
There are two main theories of the development of spelling ability: the stage model and the model of overlapping waves. In this paper, exploratory model-based clustering is used to analyze the responses of more than 3,500 pupils to subsets of 245 items. To evaluate the two theories, the resulting clusters will be ordered along a…
Descriptors: Spelling, Multivariate Analysis, Data Analysis, Skill Development
Peer reviewed
Briggs, Derek C.; Alonzo, Alicia C.; Schwab, Cheryl; Wilson, Mark – Educational Assessment, 2006
In this article we describe the development, analysis, and interpretation of a novel item format we call Ordered Multiple-Choice (OMC). A unique feature of OMC items is that they are linked to a model of student cognitive development for the construct being measured. Each of the possible answer choices in an OMC item is linked to developmental…
Descriptors: Diagnostic Tests, Multiple Choice Tests, Cognitive Development, Item Response Theory
Peer reviewed
Chase, Margaret E. – New Educator, 2006
This essay discusses the author's encounter with PRAXIS II test preparation questions that distort and misrepresent complex reading processes. Three sample test items are presented and discussed.
Descriptors: Test Items, Teacher Competency Testing, Reading Teachers, Test Coaching
Peer reviewed
Hautau, Briana; Turner, Haley C.; Carroll, Erin; Jaspers, Kathryn; Krohn, Katy; Parker, Megan; Williams, Robert L. – Journal of Behavioral Education, 2006
Students (N=153) in three equivalent sections of an undergraduate human development course compared pairs of related concepts via either written or oral discussion at the beginning of most class sessions. A writing-for-random-credit section achieved significantly higher ratings on the writing activities than did a writing-for-no-credit section.…
Descriptors: Writing Exercises, Multiple Choice Tests, Undergraduate Study, Credits
Peer reviewed
PDF on ERIC
National Center for Education Statistics, 2007
The purpose of this document is to provide background information that will be useful in interpreting the 2007 results from the Trends in International Mathematics and Science Study (TIMSS) by comparing its design, features, framework, and items with those of the U.S. National Assessment of Educational Progress and another international assessment…
Descriptors: National Competency Tests, Comparative Analysis, Achievement Tests, Test Items
Martinez, Martha I.; Ketterlin-Geller, Leanne; Tindal, Gerald – Behavioral Research and Teaching, 2007
Behavioral Research and Teaching (BRT) has developed a series of mathematics tests to assist local school districts in identifying students in grades 1-8 who may be at risk of not meeting year-end mathematics achievement goals. The tests were developed using the state mathematics standards for the relevant grade levels and administered to students…
Descriptors: Evidence, Feedback (Response), Test Results, Test Items