Showing all 14 results
Peer reviewed | Direct link
Stylianou-Georgiou, Agni; Papanastasiou, Elena C. – Educational Research and Evaluation, 2017
The purpose of our study was to examine the issue of answer changing in relation to students' abilities to monitor their behaviour accurately while responding to multiple-choice tests. The data for this study were obtained from the final examination administered to students in an educational psychology course. The results of the study indicate…
Descriptors: Role, Metacognition, Testing, Multiple Choice Tests
Peer reviewed | Direct link
Ojerinde, Dibu; Popoola, Omokunmi; Onyeneho, Patrick; Egberongbe, Aminat – Perspectives in Education, 2016
The statistical procedure used to adjust for differences in difficulty across test forms is known as "equating". Equating makes it possible for various test forms to be used interchangeably. In terms of where the equating method fits in the assessment cycle, there are pre-equating and post-equating methods. The major benefits of pre-equating, when…
Descriptors: Measurement, Comparative Analysis, High Stakes Tests, Pretests Posttests
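The Ojerinde et al. entry above rests on the idea of equating: putting scores from different forms onto a common scale so the forms can be used interchangeably. The abstract does not say which equating method the authors used, so the following is only a minimal sketch of one common approach (linear equating), with all score values invented for illustration.

    # Toy illustration of linear equating: map raw scores from Form X onto
    # the scale of Form Y by matching the two forms' means and standard
    # deviations. All numbers are hypothetical.
    from statistics import mean, pstdev

    form_x_scores = [22, 25, 27, 30, 31, 34, 36]   # hypothetical Form X raw scores
    form_y_scores = [20, 24, 26, 28, 30, 33, 35]   # hypothetical Form Y raw scores

    mu_x, sd_x = mean(form_x_scores), pstdev(form_x_scores)
    mu_y, sd_y = mean(form_y_scores), pstdev(form_y_scores)

    def equate_linear(x):
        """Express a Form X raw score on the Form Y scale."""
        return sd_y / sd_x * (x - mu_x) + mu_y

    print(equate_linear(28))   # a Form X score of 28 on the Form Y scale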
Peer reviewed | PDF on ERIC (download full text)
Bendulo, Hermabeth O.; Tibus, Erlinda D.; Bande, Rhodora A.; Oyzon, Voltaire Q.; Milla, Norberto E.; Macalinao, Myrna L. – International Journal of Evaluation and Research in Education, 2017
Testing or evaluation in an educational context is primarily used to measure and authenticate the academic readiness, learning advancement, acquisition of skills, or instructional needs of learners. This study tried to determine whether the varied combinations of arrangements of options and letter cases in a Multiple-Choice Test (MCT)…
Descriptors: Test Format, Multiple Choice Tests, Test Construction, Eye Movements
Peer reviewed | Direct link
DiBattista, David; Sinnige-Egger, Jo-Anne; Fortuna, Glenda – Journal of Experimental Education, 2014
The authors assessed the effects of using "none of the above" as an option in a 40-item, general-knowledge multiple-choice test administered to undergraduate students. Examinees who selected "none of the above" were given an incentive to write the correct answer to the question posed. Using "none of the above" as the…
Descriptors: Multiple Choice Tests, Testing, Undergraduate Students, Test Items
Peer reviewed | Direct link
Warne, Russell T.; Doty, Kristine J.; Malbica, Anne Marie; Angeles, Victor R.; Innes, Scott; Hall, Jared; Masterson-Nixon, Kelli – Journal of Psychoeducational Assessment, 2016
"Above-level testing" (also called "above-grade testing," "out-of-level testing," and "off-level testing") is the practice of administering to a child a test that is designed for an examinee population that is older or in a more advanced grade. Above-level testing is frequently used to help educators design…
Descriptors: Test Items, Testing, Academically Gifted, Talent Identification
Peer reviewed | Direct link
Hasson, Natalie; Dodd, Barbara; Botting, Nicola – International Journal of Language & Communication Disorders, 2012
Background: Sentence construction and syntactic organization are known to be poor in children with specific language impairments (SLI), but little is known about the way in which children with SLI approach language tasks, and static standardized tests contribute little to the differentiation of skills within the population of children with…
Descriptors: Alternative Assessment, Sentence Structure, Syntax, Language Processing
Maynard, Jennifer Leigh – ProQuest LLC, 2012
Emphasis on regular mathematics skill assessment, intervention, and progress monitoring under the RTI model has created a need for the development of assessment instruments that are psychometrically sound, reliable, universal, and brief. Important factors to consider when developing or selecting assessments for the school environment include what…
Descriptors: Response to Intervention, Mathematics Skills, Student Evaluation, Progress Monitoring
Peer reviewed | Direct link
Aryadoust, Vahid – International Journal of Listening, 2012
This article investigates a version of the International English Language Testing System (IELTS) listening test for evidence of differential item functioning (DIF) based on gender, nationality, age, and degree of previous exposure to the test. Overall, the listening construct was found to be underrepresented, which is probably an important cause…
Descriptors: Evidence, Test Bias, Testing, Listening Comprehension Tests
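The Aryadoust entry above concerns differential item functioning (DIF), i.e. whether an item behaves differently for comparable examinees from different groups. The abstract does not state which DIF procedure was applied, so the sketch below shows one widely used screening method (Mantel-Haenszel) with invented counts, purely as an illustration of the idea.

    # Minimal Mantel-Haenszel DIF sketch for a single item, with examinees
    # stratified by total score. Counts are invented for illustration.
    # Each stratum: (ref_correct, ref_incorrect, focal_correct, focal_incorrect)
    strata = [
        (30, 20, 25, 25),
        (45, 15, 40, 20),
        (60, 10, 50, 15),
    ]

    num = den = 0.0
    for a, b, c, d in strata:          # a, b = reference group; c, d = focal group
        t = a + b + c + d              # stratum size
        num += a * d / t
        den += b * c / t

    mh_odds_ratio = num / den          # values near 1.0 suggest little uniform DIF
    print(round(mh_odds_ratio, 2))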
Nering, Michael L., Ed.; Ostini, Remo, Ed. – Routledge, Taylor & Francis Group, 2010
This comprehensive "Handbook" focuses on the most widely used polytomous item response theory (IRT) models. These models help us understand the interaction between examinees and test questions where the questions have various response categories. The book reviews all of the major models and includes discussions about how and where the models…
Descriptors: Guides, Item Response Theory, Test Items, Correlation
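As a concrete taste of the polytomous IRT models such handbooks cover, here is a minimal sketch of the generalized partial credit model (GPCM) for an item with ordered response categories. The parameter values are invented; this is only one of the several model families the book reviews.

    # Generalized partial credit model: category probabilities for one item.
    import math

    def gpcm_probs(theta, a, thresholds):
        """Return P(X = k | theta) for k = 0..m under the GPCM."""
        exps = [0.0]                       # empty sum for category 0
        for b in thresholds:
            exps.append(exps[-1] + a * (theta - b))
        weights = [math.exp(e) for e in exps]
        total = sum(weights)
        return [w / total for w in weights]

    # 4-category item (scores 0-3), discrimination 1.2, three step parameters
    print(gpcm_probs(theta=0.5, a=1.2, thresholds=[-1.0, 0.0, 1.0]))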
Peer reviewed | Direct link
Wang, Wen-Chung; Chen, Po-Hsi; Cheng, Ying-Yao – Psychological Methods, 2004
A conventional way to analyze item responses in multiple tests is to apply unidimensional item response models separately, one test at a time. This unidimensional approach, which ignores the correlations between latent traits, yields imprecise measures when tests are short. To resolve this problem, one can use multidimensional item response models…
Descriptors: Item Response Theory, Test Items, Testing, Test Validity
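The Wang, Chen, and Cheng entry contrasts separate unidimensional analyses with multidimensional IRT, which models several correlated latent traits at once. The abstract does not give the exact model specification, so the following is only a toy multidimensional two-parameter logistic item response function with invented values.

    # Toy multidimensional 2PL item response function: the probability of a
    # correct response depends on a vector of latent traits. Values invented.
    import math

    def mirt_2pl_prob(theta, a, d):
        """P(correct | theta) = logistic(a . theta + d)."""
        logit = sum(ai * ti for ai, ti in zip(a, theta)) + d
        return 1.0 / (1.0 + math.exp(-logit))

    theta = [0.4, -0.2]    # examinee's standing on two correlated traits
    a     = [1.1, 0.6]     # item discriminations on each trait
    d     = -0.3           # item intercept (easiness)
    print(round(mirt_2pl_prob(theta, a, d), 3))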
Angoff, William H. – 1985
This paper points out that there are certain generalizations about directions for guessing and methods of scoring that require that data be derived from a random-groups design. It supports the viewpoint that it is neither sufficient nor appropriate to make such generalizations on the basis of an analysis of scores obtained from the answer sheets of…
Descriptors: Correlation, Guessing (Tests), Research Design, Scoring Formulas
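The Angoff entry deals with directions for guessing and scoring formulas. For readers unfamiliar with the latter, the classic correction-for-guessing formula score is S = R - W / (k - 1), where R is the number right, W the number wrong (omits excluded), and k the number of options per item; the numbers below are invented for illustration only.

    # Correction-for-guessing formula score: S = R - W / (k - 1).
    def formula_score(num_right, num_wrong, options_per_item):
        return num_right - num_wrong / (options_per_item - 1)

    # 40-item, 4-option test: 28 right, 8 wrong, 4 omitted
    print(formula_score(28, 8, 4))   # 28 - 8/3 = 25.33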
Peer reviewed | Direct link
Mareschal, Denis; Powell, Daisy; Westermann, Gert; Volein, Agnes – Infant and Child Development, 2005
Young infants are very sensitive to feature distribution information in the environment. However, existing work suggests that they do not make use of correlation information to form certain perceptual categories until at least 7 months of age. We suggest that the failure to use correlation information is a by-product of familiarization procedures…
Descriptors: Infants, Classification, Correlation, Familiarity
Peer reviewed | Direct link
Lievens, Filip; Sackett, Paul R. – Journal of Applied Psychology, 2007
This study used principles underlying item generation theory to posit competing perspectives about which features of situational judgment tests might enhance or impede consistent measurement across repeat test administrations. This led to 3 alternate-form development approaches (random assignment, incident isomorphism, and item isomorphism). The…
Descriptors: Validity, High Stakes Tests, Test Construction, Testing
Rippey, Robert M. – 1971
Technical improvements that may be made in the reliability and validity of tests through confidence scoring are discussed. However, studies indicate that subjects do not handle their confidence uniformly. (MS)
Descriptors: Computer Programs, Confidence Testing, Correlation, Difficulty Level
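The Rippey entry concerns confidence testing, in which the examinee spreads probability across an item's options rather than picking a single answer, and is scored with a proper scoring rule. The abstract does not specify which rule Rippey used, so the sketch below uses a logarithmic rule with invented response data, purely as an illustration of the idea.

    # Minimal confidence-scoring sketch using a logarithmic scoring rule:
    # the score is the log of the probability placed on the keyed answer
    # (higher is better; a constant offset can be added to shift the scale).
    import math

    def log_confidence_score(confidences, correct_option):
        return math.log(confidences[correct_option])

    # Item with options A-D; the examinee is fairly confident in B, which is keyed.
    confidences = {"A": 0.10, "B": 0.70, "C": 0.15, "D": 0.05}
    print(round(log_confidence_score(confidences, "B"), 3))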