Showing 3,991 to 4,005 of 9,533 results
Peer reviewed
Dolan, Conor V.; Oort, Frans J.; Stoel, Reinoud D.; Wicherts, Jelte M. – Structural Equation Modeling: A Multidisciplinary Journal, 2009
We propose a method to investigate measurement invariance in the multigroup exploratory factor model, subject to target rotation. We consider both oblique and orthogonal target rotation. This method has clear advantages over other approaches, such as the use of congruence measures. We demonstrate that the model can be implemented readily in the…
Descriptors: Test Items, Psychology, Models, College Students
Peer reviewed
Miller, G. Edward; Fitzpatrick, Steven J. – Educational and Psychological Measurement, 2009
Incorrect handling of item parameter drift during the equating process can result in equating error. If the item parameter drift is due to construct-irrelevant factors, then inclusion of these items in the estimation of the equating constants can be expected to result in equating error. On the other hand, if the item parameter drift is related to…
Descriptors: Equated Scores, Computation, Item Response Theory, Test Items
Peer reviewed
Penfield, Randall D. – Journal of Educational Measurement, 2008
Investigations of differential distractor functioning (DDF) can provide valuable information concerning the location and possible causes of measurement invariance within a multiple-choice item. In this article, I propose an odds ratio estimator of the DDF effect as modeled under the nominal response model. In addition, I propose a simultaneous…
Descriptors: Test Items, Investigations, Simulation
Peer reviewed
DiStefano, Christine; Greer, Fred W.; Kamphaus, R. W.; Brown, William H. – Journal of Early Intervention, 2014
A screening instrument used to identify young children at risk for behavioral and emotional difficulties, the Behavioral and Emotional Screening System Teacher Rating Scale-Preschool was examined. The Rasch Rating Scale Method was used to provide additional information about psychometric properties of items, respondents, and the response scale.…
Descriptors: Screening Tests, At Risk Persons, Test Validity, Rating Scales
Peer reviewed
Sockalingam, Nachamma; Schmidt, Henk G. – Interdisciplinary Journal of Problem-based Learning, 2011
This study aimed to identify salient problem characteristics perceived by students in problem-based curricula. To this end, reflective essays from biomedical students (N = 34) on characteristics of good problems were text-analyzed. Students identified eleven characteristics, of which they found the extent to which the problem leads to desired…
Descriptors: Problem Based Learning, Student Attitudes, Essays, Biological Sciences
Pearson Education, Inc., 2011
With the June 2, 2010, release of the Common Core State Standards, state-led education standards developed for K-12 English Language Arts and Mathematics, Pearson Learning Assessments and content experts conducted an in-depth study to analyze how the "Stanford 10 Achievement Test Series," Tenth Edition (Stanford 10) and Stanford 10…
Descriptors: Achievement Tests, Standardized Tests, Common Core State Standards, Alignment (Education)
Kaliski, Pamela; France, Megan; Huff, Kristen; Thurber, Allison – College Board, 2011
Developing a cognitive model of task performance is an important and often overlooked phase in assessment design; failing to establish such a model can threaten the validity of the inferences made from the scores produced by an assessment (e.g., Leighton, 2004). Conducting think aloud interviews (TAIs), where students think aloud while completing…
Descriptors: World History, Advanced Placement Programs, Achievement Tests, Protocol Analysis
Peer reviewed
Wilson, Ashlea; Kavanaugh, Abi; Moher, Rosemarie; McInroy, Megan; Gupta, Neena; Salbach, Nancy M.; Wright, F. Virginia – Physical & Occupational Therapy in Pediatrics, 2011
The aim was to develop a Challenge Module (CM) as a proposed adjunct to the Gross Motor Function Measure for children with cerebral palsy who have high-level motor function. Items were generated in a physiotherapist (PT) focus group. Item reduction was based on PTs' ratings of item importance and safety via online surveys. The proposed CM items…
Descriptors: Children, Cerebral Palsy, Measures (Individuals), Psychomotor Skills
Peer reviewed
Cawthon, Stephanie – American Annals of the Deaf, 2011
Linguistic complexity of test items is one test format element that has been studied in the context of struggling readers and their participation in paper-and-pencil tests. The present article presents findings from an exploratory study on the potential relationship between linguistic complexity and test performance for deaf readers. A total of 64…
Descriptors: Language Styles, Test Content, Syntax, Linguistics
Peer reviewed
Elbaum, Batya; Fisher, William P., Jr.; Coulter, W. Alan – Journal of Applied Measurement, 2011
Indicator 8 of the State Performance Plan (SPP), developed under the 2004 reauthorization of the Individuals with Disabilities Education Act (IDEA 2004, Public Law 108-446) requires states to collect data and report findings related to schools' facilitation of parent involvement. The Schools' Efforts to Partner with Parents Scale (SEPPS) was…
Descriptors: Disabilities, Accountability, Stakeholders, Scaling
Peer reviewed
Atar, Burcu; Kamata, Akihito – Hacettepe University Journal of Education, 2011
The Type I error rates and the power of IRT likelihood ratio test and cumulative logit ordinal logistic regression procedures in detecting differential item functioning (DIF) for polytomously scored items were investigated in this Monte Carlo simulation study. For this purpose, 54 simulation conditions (combinations of 3 sample sizes, 2 sample…
Descriptors: Test Bias, Sample Size, Monte Carlo Methods, Item Response Theory
Peer reviewed
Kibble, Jonathan D.; Johnson, Teresa – Advances in Physiology Education, 2011
The purpose of this study was to evaluate whether multiple-choice item difficulty could be predicted either by a subjective judgment by the question author or by applying a learning taxonomy to the items. Eight physiology faculty members teaching an upper-level undergraduate human physiology course consented to participate in the study. The…
Descriptors: Test Items, Hidden Curriculum, Reliability, Physiology
Peer reviewed
Dunlosky, John; Ariel, Robert – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2011
Research on study-time allocation has largely focused on agenda-based regulation, such as whether learners select items for study that are in their region of proximal learning. In 4 experiments, the authors evaluated the contribution of habitual responding to study-time allocation (e.g., reading from left to right). In Experiments 1 and 2,…
Descriptors: Time Management, Item Analysis, Study Habits, Educational Experiments
Peer reviewed
PDF on ERIC
Gütl, Christian; Lankmayr, Klaus; Weinhofer, Joachim; Höfler, Margit – Electronic Journal of e-Learning, 2011
Research in automated creation of test items for assessment purposes became increasingly important during the recent years. Due to automatic question creation it is possible to support personalized and self-directed learning activities by preparing appropriate and individualized test items quite easily with relatively little effort or even fully…
Descriptors: Test Items, Semantics, Multilingualism, Language Processing
Peer reviewed
PDF on ERIC
Sahin, Ismail – Turkish Online Journal of Educational Technology - TOJET, 2011
The purpose of this study is to develop a survey of technological pedagogical and content knowledge (TPACK). The survey consists of seven subscales forming the TPACK model: 1) technology knowledge (TK), 2) pedagogy knowledge (PK), 3) content knowledge (CK), 4) technological pedagogical knowledge (TPK), 5) technological content knowledge (TCK), 6)…
Descriptors: Preservice Teachers, Test Validity, Pedagogical Content Knowledge, Surveys