Showing 1 to 15 of 33 results
Sherwin E. Balbuena – Online Submission, 2024
This study introduces a new chi-square test statistic for testing the equality of response frequencies among distracters in multiple-choice tests. The formula uses the information from the numbers of correct and wrong answers, which become the basis for calculating the expected response frequencies per distracter. The method was…
Descriptors: Multiple Choice Tests, Statistics, Test Validity, Testing
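The Balbuena entry above describes a chi-square statistic for testing whether wrong answers are spread evenly across the distracters of a multiple-choice item. The snippet does not give the author's exact formula, so the sketch below shows only the conventional goodness-of-fit version of that idea on hypothetical response counts, with the expected frequency per distracter taken as the total number of wrong answers divided by the number of distracters.

```python
from scipy.stats import chisquare

# Hypothetical response counts for one four-option item; option "B" is keyed correct.
responses = {"A": 18, "B": 52, "C": 9, "D": 21}
correct_option = "B"

# Wrong-answer counts per distracter (the keyed option is excluded).
distracter_counts = [n for option, n in responses.items() if option != correct_option]

# Under the null hypothesis the wrong answers are spread evenly, so chisquare()
# uses a uniform expected frequency (total wrong answers / number of distracters).
stat, p_value = chisquare(distracter_counts)
print(f"chi-square = {stat:.2f}, p = {p_value:.4f}")  # df = number of distracters - 1
```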
Zubanova, Svetlana; Bodrova, Tatyana; Kruchkovich, Sofia – Journal of Educational Psychology - Propositos y Representaciones, 2020
Testing is a modern, high-quality method of checking knowledge. Informatization, which began in the late twentieth and early twenty-first centuries, contributed to the growth of various tests. However, tests are being incorporated into the educational process at a slower pace, largely because of the lack of a methodological basis for test development. It is proved that the…
Descriptors: Testing, Educational Quality, Educational Indicators, Test Construction
Peer reviewed
Coutinho, Mariana V. C.; Papanastasiou, Elena; Agni, Stylianou; Vasko, John M.; Couchman, Justin J. – International Journal of Instruction, 2020
In this study, we examined monitoring accuracy during in-class exams for Emirati, American, and Cypriot college students. In experiment 1, 120 students made local confidence ratings for each multiple-choice question in a psychology exam and also estimated their performance at the end of the exam. In experiment 2, to investigate the effect of…
Descriptors: Metacognition, Foreign Countries, Cultural Differences, Accuracy
Peer reviewed
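The Coutinho et al. entry concerns how accurately students monitor their own exam performance. The snippet does not specify the study's measures, so the following is only a minimal sketch of one common index, calibration bias (mean item-level confidence minus the proportion of items answered correctly), computed on made-up data.

```python
# Hypothetical item-level data for one examinee: confidence ratings (0-100)
# and whether each multiple-choice answer was correct (1) or not (0).
confidence = [90, 75, 60, 85, 40, 70, 95, 55]
correct = [1, 1, 0, 1, 0, 1, 1, 0]

mean_confidence = sum(confidence) / len(confidence) / 100  # rescale to 0-1
accuracy = sum(correct) / len(correct)

# Positive bias indicates overconfidence; negative bias indicates underconfidence.
calibration_bias = mean_confidence - accuracy
print(f"confidence = {mean_confidence:.2f}, accuracy = {accuracy:.2f}, bias = {calibration_bias:+.2f}")
```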
El Rahman, Sahar Abd; Zolait, Ali Hussein – International Journal of Web-Based Learning and Teaching Technologies, 2019
This article describes how, with the advent of computer-based technology, different aspects of the education system are moving from manual to automated systems. Testing is an essential part of the teaching process that helps academics classify students' levels and evaluate the outcomes of their teaching. The testing…
Descriptors: Test Items, Computer Uses in Education, Computers, Web Based Instruction
Peer reviewed
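The El Rahman and Zolait entry is about moving testing from manual to automated, web-based systems. As a trivial illustration of the kind of routine task such a system automates, the sketch below scores a multiple-choice submission against an answer key; the question labels and options are invented for the example.

```python
# Hypothetical answer key and one student's submission for a four-question quiz.
answer_key = {"Q1": "B", "Q2": "D", "Q3": "A", "Q4": "C"}
submission = {"Q1": "B", "Q2": "A", "Q3": "A", "Q4": "C"}

# Count one point per question whose submitted option matches the key.
score = sum(1 for question, key in answer_key.items() if submission.get(question) == key)
print(f"score: {score}/{len(answer_key)}")
```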
Bramley, Tom; Crisp, Victoria – Assessment in Education: Principles, Policy & Practice, 2019
For many years, question choice has been used in some UK public examinations, with students free to choose which questions they answer from a selection (within certain parameters). There has been little published research on choice of exam questions in recent years in the UK. In this article we distinguish different scenarios in which choice…
Descriptors: Test Items, Test Construction, Difficulty Level, Foreign Countries
Peer reviewed
Lindner, Marlit A.; Schult, Johannes; Mayer, Richard E. – Journal of Educational Psychology, 2022
This classroom experiment investigates the effects of adding representational pictures to multiple-choice and constructed-response test items to understand the role of the response format for the multimedia effect in testing. Participants were 575 fifth- and sixth-graders who answered 28 science test items--seven items in each of four experimental…
Descriptors: Elementary School Students, Grade 5, Grade 6, Multimedia Materials
Peer reviewed
Veerbeek, Jochanan; Vogelaar, Bart; Resing, Wilma C. M. – Journal of Cognitive Education and Psychology, 2019
Process-oriented dynamic testing aims to investigate the processes children use to solve cognitive tasks, and evaluate changes in these processes as a result of training. For the current study, a dynamic complex figure task was constructed, using the graduated prompts approach, to investigate the processes involved in solving a complex figure task…
Descriptors: Cognitive Processes, Testing, Cognitive Tests, Problem Solving
Peer reviewed
Cho, Yeonsuk; Blood, Ian A. – Language Testing, 2020
In this study, we examined how much change in "TOEFL® Primary™" listening and reading scores can be expected in relation to the time interval between test administrations. The test records of 5213 young learners of English (aged 8-13 years) in Japan and Turkey who repeated the tests were analyzed to examine test scores as a function of…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Scores
Peer reviewed
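The Cho and Blood entry looks at how much repeaters' scores change in relation to the time between administrations. Their model is not described in the snippet, so the sketch below shows only one simple way to examine such a relationship: an ordinary least-squares fit of score gain on retest interval, using invented data.

```python
import numpy as np

# Hypothetical data: retest interval in months and score gain for repeat test takers.
interval_months = np.array([1, 2, 3, 4, 6, 8, 10, 12])
score_gain = np.array([1.0, 1.5, 2.0, 2.5, 4.0, 5.0, 6.5, 7.0])

# Ordinary least-squares line: expected gain as a linear function of the interval.
slope, intercept = np.polyfit(interval_months, score_gain, deg=1)
print(f"expected gain ~ {intercept:.2f} + {slope:.2f} * months")
```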
Ceuppens, Stijn; Deprez, Johan; Dehaene, Wim; De Cock, Mieke – Physical Review Physics Education Research, 2018
This study reports on the development, validation, and administration of a 48-item multiple-choice test to assess students' representational fluency of linear functions in a physics context (1D kinematics) and a mathematics context. The test includes three external representations: graphs, tables, and formulas, which result in six possible…
Descriptors: Secondary School Students, Mathematics Tests, Test Construction, Foreign Countries
Peer reviewed
Ojerinde, Dibu; Popoola, Omokunmi; Onyeneho, Patrick; Egberongbe, Aminat – Perspectives in Education, 2016
The statistical procedure used to adjust for differences in difficulty across test forms is known as "equating". Equating makes it possible for various test forms to be used interchangeably. In terms of where the equating method fits in the assessment cycle, there are pre-equating and post-equating methods. The major benefits of pre-equating, when…
Descriptors: Measurement, Comparative Analysis, High Stakes Tests, Pretests Posttests
Peer reviewed
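The Ojerinde et al. entry refers to equating, the adjustment that lets scores from different test forms be used interchangeably. The snippet does not say which equating method the authors used, so the sketch below shows only the generic mean-sigma (linear) equating transformation applied to hypothetical scores from two forms taken by equivalent groups.

```python
import numpy as np

def linear_equate(scores_x, scores_y, x):
    """Mean-sigma linear equating: map a score x from form X onto the scale of form Y.

    A textbook transformation assuming equivalent groups, not necessarily the
    procedure used in the article.
    """
    mx, sx = np.mean(scores_x), np.std(scores_x, ddof=1)
    my, sy = np.mean(scores_y), np.std(scores_y, ddof=1)
    return my + (sy / sx) * (x - mx)

# Hypothetical raw scores from two forms administered to equivalent groups.
form_x = np.array([42, 55, 61, 48, 70, 66, 59, 53])
form_y = np.array([45, 58, 65, 50, 74, 69, 62, 57])

print(round(linear_equate(form_x, form_y, 60), 2))  # a form-X score of 60 on the form-Y scale
```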
Volov, Vyacheslav T.; Gilev, Alexander A. – International Journal of Environmental and Science Education, 2016
In item response theory (IRT), the response to a test item is treated as a probabilistic event that depends on the student's ability and the item's difficulty. It is noted that there is very little agreement in the scientific literature about how to determine the factors affecting item difficulty. It is suggested that the difficulty of the…
Descriptors: Item Response Theory, Test Items, Difficulty Level, Science Tests
Peer reviewed
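The Volov and Gilev entry frames an item response in IRT as a probabilistic event governed by the student's ability and the item's difficulty. For reference, the sketch below evaluates the standard one-parameter (Rasch) response function for hypothetical ability and difficulty values; it does not reproduce the article's own analysis of what drives item difficulty.

```python
import math

def rasch_probability(theta, b):
    """Rasch (1PL) model: P(correct) = 1 / (1 + exp(-(theta - b))),
    where theta is the examinee's ability and b is the item's difficulty,
    both expressed on the same logit scale."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical values: an average-ability examinee (theta = 0) facing items of varying difficulty.
theta = 0.0
for b in (-1.0, 0.0, 1.0):
    print(f"difficulty {b:+.1f}: P(correct) = {rasch_probability(theta, b):.2f}")
```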
Cormier, Damien C.; Wang, Kun; Kennedy, Kathleen E. – Canadian Journal of School Psychology, 2016
As the diversity of the school-age population in Canada continues to increase, it is important for school psychologists to consider the potential influence of culture and language when assessing the cognitive abilities of students from culturally and linguistically diverse backgrounds. The purpose of this study is to examine the linguistic demand…
Descriptors: Foreign Countries, Children, Intelligence Tests, Testing
Peer reviewed
Košak-Babuder, Milena; Kormos, Judit; Ratajczak, Michael; Pižorn, Karmen – Language Testing, 2019
One of the special arrangements in testing contexts is to allow dyslexic students to listen to the text while they read. In our study, we investigated the effect of read-aloud assistance on young English learners' language comprehension scores. We also examined whether students identified with dyslexia benefit from this assistance differently…
Descriptors: Dyslexia, Identification, Scores, English (Second Language)
Peer reviewed
Uner, Sinem; Akkus, Huseyin – Teacher Development, 2019
Students' perceptions are one of the sources that can be used to capture teachers' pedagogical content knowledge (PCK). Therefore, the aim of this study was to develop a scale to determine secondary students' perceptions of their teachers' PCK. Validity and reliability studies were conducted with 659 students. Both exploratory factor analysis and confirmatory factor analysis…
Descriptors: Student Attitudes, Pedagogical Content Knowledge, Student Evaluation of Teacher Performance, Secondary School Teachers
Peer reviewed
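The Uner and Akkus entry mentions exploratory and confirmatory factor analysis in validating the scale. As a rough illustration of the exploratory step only, the sketch below fits a two-factor model to randomly generated placeholder responses; the item set, factor count, and data are all invented, and confirmatory factor analysis would require a dedicated SEM package.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Placeholder Likert-type responses: 200 students by 8 scale items (random data,
# used only to show the mechanics of an exploratory factor analysis).
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(200, 8)).astype(float)

# Fit a two-factor model; the loadings show how strongly each item relates
# to each latent factor.
fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(responses)
print(np.round(fa.components_.T, 2))  # rows = items, columns = factors
```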
DiBattista, David; Sinnige-Egger, Jo-Anne; Fortuna, Glenda – Journal of Experimental Education, 2014
The authors assessed the effects of using "none of the above" as an option in a 40-item, general-knowledge multiple-choice test administered to undergraduate students. Examinees who selected "none of the above" were given an incentive to write the correct answer to the question posed. Using "none of the above" as the…
Descriptors: Multiple Choice Tests, Testing, Undergraduate Students, Test Items