Showing 136 to 150 of 1,057 results
Crisp, Victoria; Shaw, Stuart – Research Matters, 2020
For assessment contexts where both a paper-based test and an on-screen assessment are available as alternatives, it is still common for the paper-based test to be prepared first with questions later transferred into an on-screen testing platform. One challenge with this is that some questions cannot be transferred. One solution might be for…
Descriptors: Computer Assisted Testing, Test Items, Test Construction, Mathematics Tests
Peer reviewed
Petersen, Lara Aylin; Leue, Anja – Applied Cognitive Psychology, 2021
The Cambridge Face Memory Test Long (CFMT+) is used to investigate extraordinary face recognition abilities (super-recognizers [SR]). Whether lab and online presentation of the CFMT+ lead to different test performance has not yet been investigated. Furthermore, we wanted to investigate psychometric properties of the CFMT+ and the Glasgow face…
Descriptors: Recognition (Psychology), Human Body, Cognitive Tests, Psychometrics
Maddox, Bryan – OECD Publishing, 2023
The digital transition in educational testing has introduced many new opportunities for technology to enhance large-scale assessments. These include the potential to collect and use log data on test-taker response processes routinely, and on a large scale. Process data has long been recognised as a valuable source of validation evidence in…
Descriptors: Measurement, Inferences, Test Reliability, Computer Assisted Testing
Peer reviewed
Ayfer Sayin; Sabiha Bozdag; Mark J. Gierl – International Journal of Assessment Tools in Education, 2023
The purpose of this study is to generate non-verbal items for a visual reasoning test using template-based automatic item generation (AIG). The fundamental research method involved following the three stages of template-based AIG. An item from the 2016 4th-grade entrance exam of the Science and Art Center (known as BILSEM) was chosen as the…
Descriptors: Test Items, Test Format, Nonverbal Tests, Visual Measures
Peer reviewed
Thomas Bickerton, Robert; Sangwin, Chris J. – International Journal of Mathematical Education in Science and Technology, 2022
We discuss a practical method for assessing mathematical proof online. We examine the use of faded worked examples and reading comprehension questions to understand proof. By breaking down a given proof, we formulate a checklist that can be used to generate comprehension questions which can be assessed automatically online. We then provide some…
Descriptors: Mathematics Instruction, Validity, Mathematical Logic, Evaluation Methods
Peer reviewed
Shin, Jinnie; Gierl, Mark J. – International Journal of Testing, 2022
Over the last five years, tremendous strides have been made in advancing the AIG methodology required to produce items in diverse content areas. However, the one content area where enormous problems remain unsolved is language arts, generally, and reading comprehension, more specifically. While reading comprehension test items can be created using…
Descriptors: Reading Comprehension, Test Construction, Test Items, Natural Language Processing
Peer reviewed
Gorney, Kylie; Wollack, James A. – Practical Assessment, Research & Evaluation, 2022
Unlike the traditional multiple-choice (MC) format, the discrete-option multiple-choice (DOMC) format does not necessarily reveal all answer options to an examinee. The purpose of this study was to determine whether the reduced exposure of item content affects test security. We conducted an experiment in which participants were allowed to view…
Descriptors: Test Items, Test Format, Multiple Choice Tests, Item Analysis
Peer reviewed
Ted M. Clark; Daniel A. Turner; Darian C. Rostam – Journal of Chemical Education, 2022
Administering exams in large-enrollment courses is challenging, and the systems in place for this task were upended in spring 2020, when the COVID-19 pandemic forced a sudden shift to online instruction and testing. In the following year, when courses remained online, approaches to improve exam security included…
Descriptors: Chemistry, Science Instruction, Supervision, Computer Assisted Testing
Matthias von Davier, Editor; Ann Kennedy, Editor – International Association for the Evaluation of Educational Achievement, 2024
The Progress in International Reading Literacy Study (PIRLS) has been monitoring international trends in reading achievement among fourth-grade students for 25 years. As a critical point in a student's education, the fourth year of schooling establishes the foundations of literacy, with reading becoming increasingly central to learning across all…
Descriptors: Reading Achievement, Foreign Countries, Grade 4, International Assessment
Peer reviewed
Berger, Stéphanie; Verschoor, Angela J.; Eggen, Theo J. H. M.; Moser, Urs – Journal of Educational Measurement, 2019
Calibration of an item bank for computer adaptive testing requires substantial resources. In this study, we investigated whether the efficiency of calibration under the Rasch model could be enhanced by improving the match between item difficulty and student ability. We introduced targeted multistage calibration designs, a design type that…
Descriptors: Simulation, Computer Assisted Testing, Test Items, Difficulty Level
Peer reviewed
Nordenswan, Elisabeth; Kataja, Eeva-Leena; Deater-Deckard, Kirby; Korja, Riikka; Karrasch, Mira; Laine, Matti; Karlsson, Linnea; Karlsson, Hasse – SAGE Open, 2020
This study tested whether executive functioning (EF)/learning tasks from the CogState computerized test battery show a unitary latent structure. This information is important for the construction of composite measures on these tasks for applied research purposes. Based on earlier factor analytic research, we identified five CogState tasks that…
Descriptors: Executive Function, Cognitive Tests, Test Items, Computer Assisted Testing
Peer reviewed
Fadillah, Sarah Meilani; Ha, Minsu; Nuraeni, Eni; Indriyanti, Nurma Yunita – Malaysian Journal of Learning and Instruction, 2023
Purpose: Researchers discovered that when students were given the opportunity to change their answers, a majority changed their responses from incorrect to correct, and this change often increased the overall test results. What prompts students to modify their answers? This study aims to examine the modification of scientific reasoning test, with…
Descriptors: Science Tests, Multiple Choice Tests, Test Items, Decision Making
Peer reviewed
Takahiro Terao – Applied Measurement in Education, 2024
This study aimed to compare item characteristics and response time between stimulus conditions in computer-delivered listening tests. The listening materials had three variants: regular videos, frame-by-frame videos, and audio only without visuals. Participants were 228 Japanese high school students who were requested to complete one of nine…
Descriptors: Computer Assisted Testing, Audiovisual Aids, Reaction Time, High School Students
Peer reviewed
Dave Kush; Anne Dahl; Filippa Lindahl – Second Language Research, 2024
Embedded questions (EQs) are islands for filler-gap dependency formation in English, but not in Norwegian. Kush and Dahl (2022) found that first language (L1) Norwegian participants often accepted filler-gap dependencies into EQs in second language (L2) English, and proposed that this reflected persistent transfer from Norwegian of the functional…
Descriptors: Transfer of Training, Norwegian, Native Language, Grammar
Peer reviewed
Ozdemir, Burhanettin; Gelbal, Selahattin – Education and Information Technologies, 2022
The computerized adaptive tests (CAT) apply an adaptive process in which the items are tailored to individuals' ability scores. The multidimensional CAT (MCAT) designs differ in terms of different item selection, ability estimation, and termination methods being used. This study aims at investigating the performance of the MCAT designs used to…
Descriptors: Scores, Computer Assisted Testing, Test Items, Language Proficiency