Showing 4,636 to 4,650 of 9,552 results
Peer reviewed
Maij-de Meij, Annette M.; Kelderman, Henk; van der Flier, Henk – Applied Psychological Measurement, 2008
Mixture item response theory (IRT) models aid the interpretation of response behavior on personality tests and may provide possibilities for improving prediction. Heterogeneity in the population is modeled by identifying homogeneous subgroups that conform to different measurement models. In this study, mixture IRT models were applied to the…
Descriptors: Test Items, Social Desirability, Form Classes (Languages), Prediction
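As a point of reference only (the truncated abstract does not specify the model family in detail), a mixture Rasch model of the kind discussed above takes the form

\[
P(X_{ij}=1 \mid \theta_i, g) = \frac{\exp(\theta_i - b_{jg})}{1 + \exp(\theta_i - b_{jg})},
\qquad
P(\mathbf{x}_i) = \sum_{g=1}^{G} \pi_g \int \prod_{j} P(x_{ij} \mid \theta, g)\, f_g(\theta)\, d\theta,
\]

where g indexes the latent classes (the homogeneous subgroups mentioned in the abstract), b_{jg} is the difficulty of item j within class g, \pi_g is the class proportion, and f_g is the within-class ability distribution. The notation is illustrative, not taken from the article.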
Peer reviewed
Yi, Qing; Zhang, Jinming; Chang, Hua-Hua – Applied Psychological Measurement, 2008
Criteria have been proposed for assessing the severity of possible test security violations on high-stakes computerized tests. However, these criteria resulted from theoretical derivations that assumed uniformly randomized item selection. This study investigated the potential damage caused by organized item theft in computerized adaptive…
Descriptors: Test Items, Simulation, Item Analysis, Safety
Peer reviewed
Vock, Miriam; Holling, Heinz – Intelligence, 2008
The objective of this study is to explore the potential for developing IRT-based working memory scales for assessing specific working memory components in children (8-13 years). These working memory scales should measure cognitive abilities reliably in the upper range of the ability distribution as well as in the normal range, and provide a…
Descriptors: Test Items, Academic Achievement, Factor Structure, Factor Analysis
Peer reviewed
Lee, Kathryn S.; Osborne, Randall E.; Hayes, Keith A.; Simoes, Richard A. – Journal of Educational Computing Research, 2008
Little research has contrasted the effectiveness of various testing accommodations for college students diagnosed with ADHD. The current assumption is that these students are best served by extending the time they have to take a test. These investigators suppose that paced item presentation may be a more…
Descriptors: College Students, Testing Accommodations, Student Attitudes, Computer Assisted Testing
Peer reviewed
Vigneau, Francois; Bors, Douglas A. – Intelligence, 2008
Various taxonomies of Raven's Advanced Progressive Matrices (APM) items have been proposed in the literature to account for performance on the test. In the present article, three such taxonomies based on information processing, namely Carpenter, Just, and Shell's [Carpenter, P.A., Just, M.A., & Shell, P. (1990). What one intelligence test…
Descriptors: Intelligence, Intelligence Tests, Factor Analysis, Classification
Peer reviewed
El-Alfy, El-Sayed M.; Abdel-Aal, Radwan E. – Computers & Education, 2008
Recent advances in educational technologies and the widespread use of computers in schools have fueled innovations in test construction and analysis. As the measurement accuracy of a test depends on the quality of the items it includes, item selection procedures play a central role in this process. Mathematical programming and the item response…
Descriptors: Test Items, Item Analysis, Educational Technology, Test Construction
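The abstract is truncated before it names a specific formulation, but item selection by mathematical programming is typically cast as a 0-1 integer program of the following kind, shown here only as an illustrative sketch:

\[
\max_{x} \; \sum_{j=1}^{J} I_j(\theta_0)\, x_j
\quad \text{subject to} \quad
\sum_{j=1}^{J} x_j = n, \qquad x_j \in \{0,1\},
\]

where x_j indicates whether item j from the bank enters the test, I_j(\theta_0) is that item's information at a target ability level \theta_0, and n is the desired test length; content and exposure requirements enter as additional linear constraints.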
Peer reviewed
Cohen, Jon; Chan, Tsze; Jiang, Tao; Seburn, Mary – Applied Psychological Measurement, 2008
U.S. state educational testing programs administer tests to track student progress and hold schools accountable for educational outcomes. Methods from item response theory, especially Rasch models, are usually used to equate different forms of a test. The most popular method for estimating Rasch models yields inconsistent estimates and relies on…
Descriptors: Testing Programs, Educational Testing, Item Response Theory, Computation
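For context, the Rasch model mentioned in this abstract specifies the probability of a correct response as

\[
P(X_{ij}=1 \mid \theta_i, b_j) = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)},
\]

where \theta_i is the ability of examinee i and b_j the difficulty of item j. The truncated abstract does not name the estimation method it criticizes; the most widely used one is joint maximum likelihood, which is known to give inconsistent item-parameter estimates, but that attribution is an inference rather than a statement from the article.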
Peer reviewed
Henson, Robert; Roussos, Louis; Douglas, Jeff; He, Xuming – Applied Psychological Measurement, 2008
Cognitive diagnostic models (CDMs) model the probability of correctly answering an item as a function of an examinee's attribute mastery pattern. Because estimation of the mastery pattern involves more than a continuous measure of ability, reliability concepts introduced by classical test theory and item response theory do not apply. The cognitive…
Descriptors: Diagnostic Tests, Classification, Probability, Item Response Theory
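The abstract does not commit to a particular CDM, but a common example, the DINA model, illustrates how an attribute-mastery pattern maps to a response probability:

\[
P(X_{ij}=1 \mid \boldsymbol{\alpha}_i) = (1 - s_j)^{\eta_{ij}} \, g_j^{\,1-\eta_{ij}},
\qquad
\eta_{ij} = \prod_{k=1}^{K} \alpha_{ik}^{\,q_{jk}},
\]

where \boldsymbol{\alpha}_i is examinee i's binary mastery pattern over K attributes, q_{jk} records whether item j requires attribute k, and s_j and g_j are the item's slip and guessing probabilities. The notation is illustrative only.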
Peer reviewed
Lamprianou, Iasonas – International Journal of Testing, 2008
This study investigates the effect of reporting the unadjusted raw scores in a high-stakes language exam when raters differ significantly in severity and self-selected questions differ significantly in difficulty. More sophisticated models, introducing meaningful facets and parameters, are successively used to investigate the characteristics of…
Descriptors: High Stakes Tests, Raw Scores, Item Response Theory, Language Tests
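A standard way to model rater severity and question difficulty jointly, and presumably the kind of "more sophisticated model" the abstract alludes to, is a many-facet Rasch model; the sketch below uses illustrative notation and is not drawn from the article:

\[
\log \frac{P_{nijk}}{P_{nij(k-1)}} = \theta_n - b_i - c_j - \tau_k,
\]

where \theta_n is the ability of candidate n, b_i the difficulty of self-selected question i, c_j the severity of rater j, and \tau_k the threshold separating score category k from k-1.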
Hess, Karin K.; Jones, Ben S.; Carlock, Dennis; Walkup, John R. – Online Submission, 2009
To teach the rigorous skills and knowledge students need to succeed in future college-entry courses and workforce training programs, education stakeholders have increasingly called for more rigorous curricula, instruction, and assessments. Identifying the critical attributes of rigor and measuring its appearance in curricular materials is…
Descriptors: Educational Objectives, Classification, Matrices, Curriculum Development
Peer reviewed
Costagliola, Gennaro; Fuccella, Vittorio – International Journal of Distance Education Technologies, 2009
To evaluate learners' knowledge correctly, it is important to administer tests composed of good-quality question items. By the term "quality" we mean an item's potential to discriminate effectively between skilled and untrained students and to match the tutor's desired difficulty level. This article presents a rule-based e-testing system…
Descriptors: Difficulty Level, Test Items, Computer Assisted Testing, Item Response Theory
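To make the notion of item "quality" concrete, classical test theory summarizes difficulty and discrimination with indices such as

\[
p_j = \frac{n_j^{\text{correct}}}{N},
\qquad
D_j = p_j^{\text{upper}} - p_j^{\text{lower}},
\]

where p_j is the proportion of examinees answering item j correctly and D_j compares that proportion between the highest- and lowest-scoring groups; IRT replaces these with item difficulty and discrimination parameters. These particular indices are given only as background and are not necessarily the ones computed by the system described above.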
Peer reviewed
Sadler, Troy D.; Zeidler, Dana L. – Journal of Research in Science Teaching, 2009
In this article, we explore the Programme for International Student Assessment (PISA) with a lens informed by the socioscientific issues (SSI) movement. We consider the PISA definition of scientific literacy and how it is situated with respect to broader discussions of the aims of science education. We also present an overview of the SSI framework…
Descriptors: Test Items, Scientific Literacy, Science Education, Science Process Skills
Peer reviewed
Kuntsche, Emmanuel; Kuntsche, Sandra – Journal of Clinical Child and Adolescent Psychology, 2009
A short form of the Drinking Motive Questionnaire Revised (DMQ-R; Cooper, 1994) was developed, using different item selection strategies based on a nationally representative sample of 5,617 12- to 18-year-old students in Switzerland. To confirm the concurrent validity of the short-form questionnaire, or DMQ-R SF, data from a second national sample…
Descriptors: Structural Equation Models, International Studies, Test Validity, Drinking
Peer reviewed
Goll, Paulette S. – Education, 2009
"Gift of Tongues: Passing the Ohio Mathematics Graduation Test" examines the Ohio Graduation Mathematics Tests of 2004, 2005, and 2006 in the context of bilingual test takers at one of Cleveland's high schools and reports findings from a promising, bilingual pilot project in 2007 that may point to a new strategy for passing the…
Descriptors: Mathematics Tests, Exit Examinations, High Stakes Tests, Bilingualism
Peer reviewed
Abedi, Jamal – Educational Assessment, 2009
This study compared performance of both English language learners (ELLs) and non-ELL students in Grades 4 and 8 under accommodated and nonaccommodated testing conditions. The accommodations used in this study included a computerized administration of a math test with a pop-up glossary, a customized English dictionary, extra testing time, and…
Descriptors: Computer Assisted Testing, Testing Accommodations, Mathematics Tests, Grade 4