Showing 1 to 15 of 70 results
Peer reviewed
Direct link
Hung Tan Ha; Duyen Thi Bich Nguyen; Tim Stoeckel – Language Assessment Quarterly, 2025
This article compares two methods for detecting local item dependence (LID): residual correlation examination and Rasch testlet modeling (RTM), in a commonly used 3:6 matching format and an extended matching test (EMT) format. The two formats are hypothesized to facilitate different levels of item dependency due to differences in the number of…
Descriptors: Comparative Analysis, Language Tests, Test Items, Item Analysis
Peer reviewed
PDF on ERIC Download full text
Pentecost, Thomas C.; Raker, Jeffery R.; Murphy, Kristen L. – Practical Assessment, Research & Evaluation, 2023
Using multiple versions of an assessment has the potential to introduce item environment effects. These types of effects result in version-dependent item characteristics (i.e., difficulty and discrimination). Methods to detect such effects and their resulting implications are important for all levels of assessment where multiple forms of an assessment…
Descriptors: Item Response Theory, Test Items, Test Format, Science Tests
Peer reviewed
Direct link
Corrin Moss; Sharon Kwabi; Scott P. Ardoin; Katherine S. Binder – Reading and Writing: An Interdisciplinary Journal, 2024
The ability to form a mental model of a text is an essential component of successful reading comprehension (RC), and purpose for reading can influence mental model construction. Participants were assigned to one of two conditions during an RC test to alter their purpose for reading: concurrent (texts and questions were presented simultaneously)…
Descriptors: Eye Movements, Reading Comprehension, Test Format, Short Term Memory
Peer reviewed
Direct link
Srikanth Allamsetty; M. V. S. S. Chandra; Neelima Madugula; Byamakesh Nayak – IEEE Transactions on Learning Technologies, 2024
The present study addresses the problem of student assessment via online examinations at higher educational institutes (HEIs). With the current COVID-19 outbreak, the majority of educational institutes are conducting online examinations to assess their students, where there would always be a chance that the students go for…
Descriptors: Computer Assisted Testing, Accountability, Higher Education, Comparative Analysis
Peer reviewed
Direct link
Zhao, Wenbo; Li, Jiaojiao; Shanks, David R.; Li, Baike; Hu, Xiao; Yang, Chunliang; Luo, Liang – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2023
Making metamemory judgments reactively changes item memory itself. Here we report the first investigation of reactive influences of making judgments of learning (JOLs) on interitem relational memory--specifically, temporal (serial) order memory. Experiment 1 found that making JOLs impaired order reconstruction. Experiment 2 observed minimal…
Descriptors: Metacognition, Memory, Meta Analysis, Recall (Psychology)
Peer reviewed
PDF on ERIC Download full text
Jang, Jung Un; Kim, Eun Joo – Journal of Curriculum and Teaching, 2022
This study examines the validity of pen-and-paper and smart-device-based versions of the optician's examination. The questions developed for each medium were based on the national optician's simulation test. The subjects of this study were 60 students enrolled in E University. The data analysis was performed to verify the equivalence of the two…
Descriptors: Optometry, Licensing Examinations (Professions), Test Format, Test Validity
Peer reviewed
PDF on ERIC Download full text
Mimi Ismail; Ahmed Al-Badri; Said Al-Senaidi – Journal of Education and e-Learning Research, 2025
This study aimed to reveal differences in individuals' abilities, their standard errors, and the psychometric properties of the test across the two methods of administering the test (electronic and paper). The descriptive approach was used to achieve the study's objectives. The study sample consisted of 74 male and female students at the…
Descriptors: Achievement Tests, Computer Assisted Testing, Psychometrics, Item Response Theory
Peer reviewed
PDF on ERIC Download full text
Guo, Hongwen; Rios, Joseph A.; Ling, Guangming; Wang, Zhen; Gu, Lin; Yang, Zhitong; Liu, Lydia O. – ETS Research Report Series, 2022
Different variants of the selected-response (SR) item type have been developed for various reasons (e.g., simulating realistic situations, examining critical-thinking and/or problem-solving skills). Generally, the variants of the SR item format are more complex than traditional multiple-choice (MC) items, which may be more challenging to test…
Descriptors: Test Format, Test Wiseness, Test Items, Item Response Theory
Peer reviewed
PDF on ERIC Download full text
Vida, Leonardo J.; Bolsinova, Maria; Brinkhuis, Matthieu J. S. – International Educational Data Mining Society, 2021
The quality of exams drives the test-taking behavior of examinees and is a proxy for the quality of teaching. As most university exams have strict time limits, and speededness is an important measure of the cognitive state of examinees, speededness might be used to assess the connection between exam quality and examinee performance. The practice of…
Descriptors: Accuracy, Test Items, Tests, Student Behavior
Peer reviewed
Direct link
Park, Yena; Lee, Senyung; Shin, Sun-Young – Language Testing, 2022
Despite consistent calls for authentic stimuli in listening tests for better construct representation, unscripted texts have been rarely adopted in high-stakes listening tests due to perceived inefficiency. This study details how a local academic listening test was developed using authentic unscripted audio-visual texts from the local target…
Descriptors: Listening Comprehension Tests, English for Academic Purposes, Test Construction, Foreign Students
Peer reviewed
PDF on ERIC Download full text
Sharareh Sadat Sarsarabi; Zeinab Sazegar – International Journal of Language Testing, 2023
The stem of a multiple-choice question can be written using either of two sentence types: interruptive (periodic) or cumulative (loose). This study examines different kinds of stems in designing multiple-choice (MC) items. To fill the existing gap in the literature, two groups of student teachers taking general English courses…
Descriptors: Language Tests, Test Format, Multiple Choice Tests, Student Placement
Peer reviewed
Direct link
Moon, Jung Aa; Keehner, Madeleine; Katz, Irvin R. – Educational Assessment, 2020
We investigated how item formats influence test takers' response tendencies under uncertainty. Adult participants solved content-equivalent math items in three formats: multiple-selection multiple-choice, grid with forced-choice (true-false) options, and grid with non-forced-choice options. Participants showed a greater tendency to commit (rather…
Descriptors: College Students, Test Wiseness, Test Format, Test Items
Al-Jarf, Reima – Online Submission, 2023
This article aims to give a comprehensive guide to planning and designing vocabulary tests, which includes identifying the skills to be covered by the test; outlining the course content covered; preparing a table of specifications that shows the skill, content topics, and number of questions allocated to each; and preparing the test instructions. The…
Descriptors: Vocabulary Development, Learning Processes, Test Construction, Course Content
Peer reviewed
PDF on ERIC Download full text
Ben Seipel; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison; Patrick C. Kennedy – Grantee Submission, 2022
Originally designed for students in Grades 3 through 5, MOCCA (formerly the Multiple-choice Online Causal Comprehension Assessment) identifies students who struggle with comprehension and helps uncover why they struggle. There are many reasons why students might not comprehend what they read. They may struggle with decoding, or reading words…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Diagnostic Tests, Reading Tests
Peer reviewed
Direct link
Höhne, Jan Karem; Schlosser, Stephan; Krebs, Dagmar – Field Methods, 2017
Measuring attitudes and opinions with agree/disagree (A/D) questions is a common method in social research because it appears possible to measure different constructs with identical response scales. However, theoretical considerations suggest that A/D questions require considerable cognitive processing. Item-specific (IS) questions,…
Descriptors: Online Surveys, Test Format, Test Items, Difficulty Level