Showing 1 to 15 of 62 results
Peer reviewed
PDF on ERIC: Download full text
Basman, Munevver – International Journal of Assessment Tools in Education, 2023
Ensuring the validity of a test requires checking that all items yield similar results across different groups of individuals. However, differential item functioning (DIF) occurs when individuals of equal ability from different groups obtain different results on the same test item. Based on Item Response Theory and Classic Test…
Descriptors: Test Bias, Test Items, Test Validity, Item Response Theory
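The DIF idea summarized above is often screened with the Mantel-Haenszel common odds ratio, which compares reference- and focal-group performance on one item within matched ability strata. A minimal sketch with hypothetical counts (not data from the cited study):

```python
# Mantel-Haenszel screening for DIF on a single item (hypothetical data).
# Each stratum groups examinees of similar total score; within a stratum:
# (ref_correct, ref_wrong, focal_correct, focal_wrong)
strata = [
    (40, 10, 35, 15),
    (30, 20, 22, 28),
    (15, 35, 10, 40),
]

# Common odds ratio pooled across strata; a value near 1.0 suggests
# no DIF, while values far from 1.0 flag the item for review.
num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
alpha_mh = num / den

print(f"Mantel-Haenszel common odds ratio: {alpha_mh:.2f}")
```

In practice the matching variable, strata widths, and significance test (e.g. the chi-square form of the statistic) all matter; this sketch only shows the core pooled-odds-ratio computation.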
Peer reviewed
Direct link
Dombrowski, Stefan C.; McGill, Ryan J.; Canivez, Gary L.; Watkins, Marley W.; Beaujean, A. Alexander – Journal of Psychoeducational Assessment, 2021
This article addresses conceptual and methodological shortcomings in conducting and interpreting intelligence test factor analytic research that appeared in Decker, S. L., Bridges, R. M., Luedke, J. C., & Eason, M. J. (2020), "Dimensional evaluation of cognitive measures: Methodological confounds and theoretical concerns."…
Descriptors: Factor Analysis, Intelligence Tests, Psychoeducational Methods, Error Patterns
Peer reviewed
Direct link
Barnes, M. Elizabeth; Misheva, Taya; Supriya, K.; Rutledge, Michael; Brownell, Sara E. – CBE - Life Sciences Education, 2022
Hundreds of articles have explored the extent to which individuals accept evolution, and the Measure of Acceptance of the Theory of Evolution (MATE) is the most often used survey. However, research indicates the MATE has limitations, and it has not been updated since its creation more than 20 years ago. In this study, we revised the MATE using…
Descriptors: Evolution, Measures (Individuals), Knowledge Level, Scientific Principles
Peer reviewed
Direct link
Sirnoorkar, Amogh; Mazumdar, Anwesh; Kumar, Arvind – Physical Review Physics Education Research, 2020
We elaborate on a new approach of assessing content-based epistemic clarity of college physics students in terms of their ability to discriminate between different epistemic warrants for propositions in a chained argument in physics. A threefold classification (nominal, physical, and mathematical) of warrants is used, with each class split into a…
Descriptors: Physics, Science Instruction, College Science, Measurement Techniques
Peer reviewed
Direct link
Sorenson, Rachel A. – Update: Applications of Research in Music Education, 2021
The ability to accurately detect performance errors is a fundamental skill for music educators and has been a popular topic of research within the field of music education. In fact, it has been suggested that roughly half of all ensemble rehearsals are dedicated to error detection. The purpose of this literature review was to synthesize the…
Descriptors: Music Education, Music Teachers, Error Patterns, Teaching Skills
Peer reviewed
Direct link
Rouweler, Liset; Varkevisser, Nelleke; Brysbaert, Marc; Maassen, Ben; Tops, Wim – European Journal of Special Needs Education, 2020
In this study, we present a new diagnostic test for dyslexia, called the Flamingo Test, inspired by the French Alouette Test. The purpose of the test is to measure students' word decoding skills and reading fluency by means of a grammatically correct but meaningless text. Two experiments were run to test the predictive validity of the Flamingo…
Descriptors: Foreign Countries, College Students, Dyslexia, Decoding (Reading)
Adrea J. Truckenmiller; Eunsoo Cho; Gary A. Troia – Grantee Submission, 2022
Although educators frequently use assessment to identify who needs supplemental instruction and whether that instruction is working, there is a lack of research investigating assessment that informs what instruction students need. The purpose of the current study was to determine if a brief (approximately 20 min) task that reflects a common middle…
Descriptors: Middle School Teachers, Middle School Students, Test Validity, Writing (Composition)
Peer reviewed
Direct link
Castles, Anne; Polito, Vince; Pritchard, Stephen; Anandakumar, Thushara; Coltheart, Max – Australian Journal of Learning Difficulties, 2018
Nonword reading measures are widely used to index children's phonics knowledge, and are included in the Phonics Screening Check currently implemented in England and under consideration in Australia. However, critics have argued that the use of nonword measures disadvantages good readers, as they will be influenced by their strong lexical knowledge…
Descriptors: Reading Tests, Phonics, Error Patterns, Elementary School Students
Peer reviewed
Direct link
Liu, Bowen; Kennedy, Patrick C.; Seipel, Ben; Carlson, Sarah E.; Biancarosa, Gina; Davison, Mark L. – Journal of Educational Measurement, 2019
This article describes an ongoing project to develop a formative, inferential reading comprehension assessment of causal story comprehension. It has three features to enhance classroom use: equated scale scores for progress monitoring within and across grades, a scale score to distinguish among low-scoring students based on patterns of mistakes,…
Descriptors: Formative Evaluation, Reading Comprehension, Story Reading, Test Construction
Liu, Bowen; Kennedy, Patrick C.; Seipel, Ben; Carlson, Sarah E.; Biancarosa, Gina; Davison, Mark L. – Grantee Submission, 2019
This paper describes an on-going project to develop a formative, inferential reading comprehension assessment of causal story comprehension. It has three features to enhance classroom use: equated scale scores for progress monitoring within and across grades, a scale score to distinguish among low-scoring students based on patterns of mistakes,…
Descriptors: Formative Evaluation, Reading Comprehension, Story Reading, Test Construction
Peer reviewed
Direct link
Tavares, Walter; Brydges, Ryan; Myre, Paul; Prpic, Jason; Turner, Linda; Yelle, Richard; Huiskamp, Maud – Advances in Health Sciences Education, 2018
Assessment of clinical competence is complex and inference based. Trustworthy and defensible assessment processes must have favourable evidence of validity, particularly where decisions are considered high stakes. We aimed to organize, collect and interpret validity evidence for a high-stakes simulation-based assessment strategy for certifying…
Descriptors: Competence, Simulation, Allied Health Personnel, Certification
Peer reviewed
Direct link
Sato, Takanori; McNamara, Tim – Applied Linguistics, 2019
Applied linguists have developed complex theories of the ability to communicate in a second language (L2). However, the perspectives on L2 communication ability of speakers who are not trained language professionals have been incorporated neither into theories of communication ability nor into the criteria for assessing performance on…
Descriptors: Second Language Learning, Oral Language, Applied Linguistics, Linguistic Theory
Peer reviewed
Direct link
Ceuppens, Stijn; Deprez, Johan; Dehaene, Wim; De Cock, Mieke – Physical Review Physics Education Research, 2018
This study reports on the development, validation, and administration of a 48-item multiple-choice test to assess students' representational fluency of linear functions in a physics context (1D kinematics) and a mathematics context. The test includes three external representations: graphs, tables, and formulas, which result in six possible…
Descriptors: Secondary School Students, Mathematics Tests, Test Construction, Foreign Countries
Peer reviewed
PDF on ERIC: Download full text
Kogar, Esin Yilmaz; Kelecioglu, Hülya – Journal of Education and Learning, 2017
The purpose of this research is first to estimate the item and ability parameters, and the standard errors of those parameters, obtained from unidimensional item response theory (UIRT), bifactor (BIF), and testlet response theory (TRT) models in tests that include testlets, when the number of testlets, number of independent items, and…
Descriptors: Item Response Theory, Models, Mathematics Tests, Test Items
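The UIRT models compared above build on standard item response functions. As a rough illustration, the two-parameter logistic (2PL) model gives the probability of a correct response from ability theta, discrimination a, and difficulty b (the symbols are standard IRT notation, not taken from the article):

```python
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL item response function: probability that an examinee with
    ability theta answers correctly an item with discrimination a
    and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An examinee whose ability equals the item's difficulty answers
# correctly with probability 0.5, regardless of discrimination.
print(p_correct(theta=0.0, a=1.2, b=0.0))
```

Bifactor and testlet models extend this by adding, respectively, group factors or testlet-specific random effects to the exponent; the studies above compare the parameter estimates and standard errors these variants produce.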
Peer reviewed
Direct link
Mao, Liyang; Liu, Ou Lydia; Roohr, Katrina; Belur, Vinetha; Mulholland, Matthew; Lee, Hee-Sun; Pallant, Amy – Educational Assessment, 2018
Scientific argumentation is one of the core practices for teachers to implement in science classrooms. We developed a computer-based formative assessment to support students' construction and revision of scientific arguments. The assessment is built upon automated scoring of students' arguments and provides feedback to students and teachers.…
Descriptors: Computer Assisted Testing, Science Tests, Scoring, Automation