Showing 1 to 15 of 63 results
Peer reviewed
Direct link
Hung Tan Ha; Duyen Thi Bich Nguyen; Tim Stoeckel – Language Assessment Quarterly, 2025
This article compares two methods for detecting local item dependence (LID): residual correlation examination and Rasch testlet modeling (RTM), in a commonly used 3:6 matching format and an extended matching test (EMT) format. The two formats are hypothesized to facilitate different levels of item dependency due to differences in the number of…
Descriptors: Comparative Analysis, Language Tests, Test Items, Item Analysis
Peer reviewed
PDF on ERIC Download full text
McGuire, Michael J. – International Journal for the Scholarship of Teaching and Learning, 2023
College students in a lower-division psychology course made metacognitive judgments by predicting and postdicting performance for true-false, multiple-choice, and fill-in-the-blank question sets on each of three exams. This study investigated which question format would result in the most accurate metacognitive judgments. Extending Koriat's (1997)…
Descriptors: Metacognition, Multiple Choice Tests, Accuracy, Test Format
Peer reviewed
Direct link
Peter A. Edelsbrunner; Bianca A. Simonsmeier; Michael Schneider – Educational Psychology Review, 2025
Knowledge is an important predictor and outcome of learning and development. Its measurement is challenged by the fact that knowledge can be integrated and homogeneous, or fragmented and heterogeneous, which can change through learning. These characteristics of knowledge are at odds with current standards for test development, demanding a high…
Descriptors: Meta Analysis, Predictor Variables, Learning Processes, Knowledge Level
Peer reviewed
Direct link
Kárász, Judit T.; Széll, Krisztián; Takács, Szabolcs – Quality Assurance in Education: An International Perspective, 2023
Purpose: Based on the general formula, which depends on the length and difficulty of the test, the number of respondents, and the number of ability levels, this study aims to provide a closed formula for adaptive tests of medium difficulty (probability of solution p = 1/2) to determine the accuracy of the parameters for each item and in…
Descriptors: Test Length, Probability, Comparative Analysis, Difficulty Level
Peer reviewed
PDF on ERIC Download full text
David Bell; Vikki O'Neill; Vivienne Crawford – Practitioner Research in Higher Education, 2023
We compared the influence of an open-book, extended-duration format versus a closed-book, time-limited format on the reliability and validity of written assessments of pharmacology learning outcomes within our medical and dental courses. Our dental cohort undertake a mid-year test (30 x free-response short answer to a question, SAQ) and an end-of-year paper (4 x SAQ,…
Descriptors: Undergraduate Students, Pharmacology, Pharmaceutical Education, Test Format
Peer reviewed
PDF on ERIC Download full text
Öztürk, Nagihan Boztunç – Universal Journal of Educational Research, 2019
In this study, how the length and characteristics of the routing module in different panel designs affect measurement precision is examined. The study considers six different routing module lengths, nine different routing module characteristics, and two different panel designs. At the end of the study, the effects of conditions on…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Length, Test Format
Peer reviewed
Direct link
O'Grady, Stefan – Language Teaching Research, 2023
The current study explores the impact of varying multiple-choice question preview and presentation formats in a test of second language listening proficiency targeting different levels of text comprehension. In a between-participant design, participants completed a 30-item test of listening comprehension featuring implicit and explicit information…
Descriptors: Language Tests, Multiple Choice Tests, Scores, Second Language Learning
Peer reviewed
Direct link
Aryadoust, Vahid – Computer Assisted Language Learning, 2020
The aim of the present study is two-fold. First, it uses eye-tracking to investigate the dynamics of item reading, in both multiple-choice and matching items, before and during two hearings of listening passages in a computerized while-listening performance (WLP) test. Second, it investigates answer changing during the two hearings, which include…
Descriptors: Eye Movements, Test Items, Secondary School Students, Reading Processes
Peer reviewed
Direct link
O'Grady, Stefan – Innovation in Language Learning and Teaching, 2023
Purpose: The current study applies an innovative approach to the assessment of second language listening comprehension skills. This is an important focus in need of innovation because scores generated through language assessment tasks should reflect variation in the target skill and the literature broadly suggests that conventional methods of…
Descriptors: Listening Comprehension, Second Language Learning, Correlation, English (Second Language)
Peer reviewed
PDF on ERIC Download full text
Haug, Tobias; Ebling, Sarah; Braem, Penny Boyes; Tissi, Katja; Sidler-Miserez, Sandra – Language Education & Assessment, 2019
In German Switzerland, the learning and assessment of Swiss German Sign Language ("Deutschschweizerische Gebärdensprache," DSGS) takes place in different contexts, for example, in tertiary education or in continuing education courses. As part of the still ongoing implementation of the Common European Framework of Reference for DSGS,…
Descriptors: German, Sign Language, Language Tests, Test Items
Peer reviewed
Direct link
King, Rosemary; Blayney, Paul; Sweller, John – Accounting Education, 2021
This study offers evidence of the impact of language background on the performance of students enrolled in an accounting study unit. It aims to quantify the effects of language background on performance in essay questions, compared to calculation questions requiring an application of procedures. Marks were collected from 2850 students. The results…
Descriptors: Cognitive Ability, Accounting, Native Language, Second Language Learning
Peer reviewed
Direct link
McLean, Stuart; Stewart, Jeffrey; Batty, Aaron Olaf – Language Testing, 2020
Vocabulary's relationship to reading proficiency is frequently cited as a justification for the assessment of L2 written receptive vocabulary knowledge. However, to date, there has been relatively little research regarding which modalities of vocabulary knowledge have the strongest correlations to reading proficiency, and observed differences have…
Descriptors: Prediction, Reading Tests, Language Proficiency, Test Items
Peer reviewed
Direct link
Wang, Yan; Kim, Eun Sook; Dedrick, Robert F.; Ferron, John M.; Tan, Tony – Educational and Psychological Measurement, 2018
Wording effects associated with positively and negatively worded items have been found in many scales. Such effects may threaten construct validity and introduce systematic bias in the interpretation of results. A variety of models have been applied to address wording effects, such as the correlated uniqueness model and the correlated traits and…
Descriptors: Test Items, Test Format, Correlation, Construct Validity
Peer reviewed
Direct link
Scott, Terry F.; Schumayer, Dániel – Physical Review Physics Education Research, 2017
The Force Concept Inventory is one of the most popular and most analyzed multiple-choice concept tests used to investigate students' understanding of Newtonian mechanics. The correct answers poll a set of underlying Newtonian concepts and the coherence of these underlying concepts has been found in the data. However, this inventory was constructed…
Descriptors: World Views, Scientific Concepts, Scientific Principles, Multiple Choice Tests
Peer reviewed
Direct link
Constantinou, Filio – Cambridge Journal of Education, 2020
Written examinations represent one of the most common assessment tools in education. Though typically perceived as measurement instruments, written examinations are primarily texts that perform a communicative function. To complement existing research, this study viewed written examinations as a distinct form of communication (i.e. 'register').…
Descriptors: Sociolinguistics, Linguistic Theory, Test Items, Item Analysis