Showing 4,006 to 4,020 of 9,552 results
Foy, Pierre, Ed.; Arora, Alka, Ed.; Stanco, Gabrielle M., Ed. – International Association for the Evaluation of Educational Achievement, 2013
This supplement describes national adaptations made to the international version of the TIMSS 2011 background questionnaires. This information provides users with a guide to evaluate the availability of internationally comparable data for use in secondary analyses involving the TIMSS 2011 background variables. Background questionnaire adaptations…
Descriptors: Questionnaires, Technology Transfer, Adoption (Ideas), Media Adaptation
Foy, Pierre, Ed.; Arora, Alka, Ed.; Stanco, Gabrielle M., Ed. – International Association for the Evaluation of Educational Achievement, 2013
This supplement contains documentation on all the derived variables contained in the TIMSS 2011 data files that are based on background questionnaire variables. These variables were used to report background data in the TIMSS 2011 International Results in Mathematics and TIMSS 2011 International Results in Science reports, and are made available…
Descriptors: Questionnaires, Guides, Guidelines, International Education
Peer reviewed
Mroch, Andrew A.; Suh, Youngsuk; Kane, Michael T.; Ripkey, Douglas R. – Measurement: Interdisciplinary Research and Perspectives, 2009
This study uses the results of two previous papers (Kane, Mroch, Suh, & Ripkey, this issue; Suh, Mroch, Kane, & Ripkey, this issue) and the literature on linear equating to evaluate five linear equating methods along several dimensions, including the plausibility of their assumptions and their levels of bias and root mean squared difference…
Descriptors: Equated Scores, Methods, Test Items, Differences
Peer reviewed
Lee, Won-Chan; Brennan, Robert L.; Wan, Lei – Applied Psychological Measurement, 2009
For a test that consists of dichotomously scored items, several approaches have been reported in the literature for estimating classification consistency and accuracy indices based on a single administration of a test. Classification consistency and accuracy have not been studied much, however, for "complex" assessments--for example,…
Descriptors: Classification, Reliability, Test Items, Scoring
Peer reviewed
Dolan, Conor V.; Oort, Frans J.; Stoel, Reinoud D.; Wicherts, Jelte M. – Structural Equation Modeling: A Multidisciplinary Journal, 2009
We propose a method to investigate measurement invariance in the multigroup exploratory factor model, subject to target rotation. We consider both oblique and orthogonal target rotation. This method has clear advantages over other approaches, such as the use of congruence measures. We demonstrate that the model can be implemented readily in the…
Descriptors: Test Items, Psychology, Models, College Students
Peer reviewed
Miller, G. Edward; Fitzpatrick, Steven J. – Educational and Psychological Measurement, 2009
Incorrect handling of item parameter drift during the equating process can result in equating error. If the item parameter drift is due to construct-irrelevant factors, then inclusion of these items in the estimation of the equating constants can be expected to result in equating error. On the other hand, if the item parameter drift is related to…
Descriptors: Equated Scores, Computation, Item Response Theory, Test Items
Peer reviewed
Penfield, Randall D. – Journal of Educational Measurement, 2008
Investigations of differential distractor functioning (DDF) can provide valuable information concerning the location and possible causes of measurement invariance within a multiple-choice item. In this article, I propose an odds ratio estimator of the DDF effect as modeled under the nominal response model. In addition, I propose a simultaneous…
Descriptors: Test Items, Investigations, Simulation
Peer reviewed
DiStefano, Christine; Greer, Fred W.; Kamphaus, R. W.; Brown, William H. – Journal of Early Intervention, 2014
A screening instrument used to identify young children at risk for behavioral and emotional difficulties, the Behavioral and Emotional Screening System Teacher Rating Scale-Preschool was examined. The Rasch Rating Scale Method was used to provide additional information about psychometric properties of items, respondents, and the response scale.…
Descriptors: Screening Tests, At Risk Persons, Test Validity, Rating Scales
Peer reviewed
Sockalingam, Nachamma; Schmidt, Henk G. – Interdisciplinary Journal of Problem-based Learning, 2011
This study aimed to identify salient problem characteristics perceived by students in problem-based curricula. To this end, reflective essays from biomedical students (N = 34) on characteristics of good problems were text-analyzed. Students identified eleven characteristics, of which they found the extent to which the problem leads to desired…
Descriptors: Problem Based Learning, Student Attitudes, Essays, Biological Sciences
Pearson Education, Inc., 2011
With the June 2, 2010, release of the Common Core State Standards, state-led education standards developed for K-12 English Language Arts and Mathematics, Pearson Learning Assessments and content experts conducted an in-depth study to analyze how the "Stanford 10 Achievement Test Series," Tenth Edition (Stanford 10) and Stanford 10…
Descriptors: Achievement Tests, Standardized Tests, Common Core State Standards, Alignment (Education)
Kaliski, Pamela; France, Megan; Huff, Kristen; Thurber, Allison – College Board, 2011
Developing a cognitive model of task performance is an important and often overlooked phase in assessment design; failing to establish such a model can threaten the validity of the inferences made from the scores produced by an assessment (e.g., Leighton, 2004). Conducting think aloud interviews (TAIs), where students think aloud while completing…
Descriptors: World History, Advanced Placement Programs, Achievement Tests, Protocol Analysis
Peer reviewed
Wilson, Ashlea; Kavanaugh, Abi; Moher, Rosemarie; McInroy, Megan; Gupta, Neena; Salbach, Nancy M.; Wright, F. Virginia – Physical & Occupational Therapy in Pediatrics, 2011
The aim was to develop a Challenge Module (CM) as a proposed adjunct to the Gross Motor Function Measure for children with cerebral palsy who have high-level motor function. Items were generated in a physiotherapist (PT) focus group. Item reduction was based on PTs' ratings of item importance and safety via online surveys. The proposed CM items…
Descriptors: Children, Cerebral Palsy, Measures (Individuals), Psychomotor Skills
Peer reviewed
Cawthon, Stephanie – American Annals of the Deaf, 2011
Linguistic complexity of test items is one test format element that has been studied in the context of struggling readers and their participation in paper-and-pencil tests. The present article presents findings from an exploratory study on the potential relationship between linguistic complexity and test performance for deaf readers. A total of 64…
Descriptors: Language Styles, Test Content, Syntax, Linguistics
Peer reviewed
Elbaum, Batya; Fisher, William P., Jr.; Coulter, W. Alan – Journal of Applied Measurement, 2011
Indicator 8 of the State Performance Plan (SPP), developed under the 2004 reauthorization of the Individuals with Disabilities Education Act (IDEA 2004, Public Law 108-446) requires states to collect data and report findings related to schools' facilitation of parent involvement. The Schools' Efforts to Partner with Parents Scale (SEPPS) was…
Descriptors: Disabilities, Accountability, Stakeholders, Scaling
Peer reviewed
Atar, Burcu; Kamata, Akihito – Hacettepe University Journal of Education, 2011
The Type I error rates and the power of IRT likelihood ratio test and cumulative logit ordinal logistic regression procedures in detecting differential item functioning (DIF) for polytomously scored items were investigated in this Monte Carlo simulation study. For this purpose, 54 simulation conditions (combinations of 3 sample sizes, 2 sample…
Descriptors: Test Bias, Sample Size, Monte Carlo Methods, Item Response Theory