Showing 616 to 630 of 9,533 results
Peer reviewed
Direct link
Kuijpers, Renske E.; Visser, Ingmar; Molenaar, Dylan – Journal of Educational and Behavioral Statistics, 2021
Mixture models have been developed to enable detection of within-subject differences in responses and response times to psychometric test items. To enable mixture modeling of both responses and response times, a distributional assumption is needed for the within-state response time distribution. Since violations of the assumed response time…
Descriptors: Test Items, Responses, Reaction Time, Models
Peer reviewed
Direct link
Gao, Xuliang; Ma, Wenchao; Wang, Daxun; Cai, Yan; Tu, Dongbo – Journal of Educational and Behavioral Statistics, 2021
This article proposes a class of cognitive diagnosis models (CDMs) for polytomously scored items with different link functions. Many existing polytomous CDMs can be considered as special cases of the proposed class of polytomous CDMs. Simulation studies were carried out to investigate the feasibility of the proposed CDMs and the performance of…
Descriptors: Cognitive Measurement, Models, Test Items, Scoring
Peer reviewed
PDF on ERIC Download full text
Kezer, Fatih – International Journal of Assessment Tools in Education, 2021
Item response theory (IRT) offers important advantages for exams that are, or will be, administered digitally. For computerized adaptive tests to make valid and reliable predictions supported by IRT, good-quality item pools are needed. This study examines how adaptive test applications vary across item pools consisting of items…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Item Response Theory
Peer reviewed
PDF on ERIC Download full text
Akin-Arikan, Çigdem; Gelbal, Selahattin – Eurasian Journal of Educational Research, 2021
Purpose: This study aims to compare the performances of Item Response Theory (IRT) equating and kernel equating (KE) methods based on equating errors (RMSD) and standard error of equating (SEE) using the anchor item nonequivalent groups design. Method: Within this scope, a set of conditions, including ability distribution, type of anchor items…
Descriptors: Equated Scores, Item Response Theory, Test Items, Statistical Analysis
Peer reviewed
Direct link
Clariana, Roy B.; Park, Eunsung – Educational Technology Research and Development, 2021
Cognitive and metacognitive processes during learning depend on accurate monitoring; this investigation examines the influence of immediate, item-level knowledge-of-correct-response feedback on cognition monitoring accuracy. In an optional end-of-course computer-based review lesson, participants (n = 68) were randomly assigned to groups to receive…
Descriptors: Feedback (Response), Cognitive Processes, Accuracy, Difficulty Level
Peer reviewed
Direct link
Schulte, Niklas; Holling, Heinz; Bürkner, Paul-Christian – Educational and Psychological Measurement, 2021
Forced-choice questionnaires can prevent faking and other response biases typically associated with rating scales. However, the derived trait scores are often unreliable and ipsative, making interindividual comparisons in high-stakes situations impossible. Several studies suggest that these problems vanish if the number of measured traits is high.…
Descriptors: Questionnaires, Measurement Techniques, Test Format, Scoring
Peer reviewed
PDF on ERIC Download full text
Qian, Jiahe; Gu, Lixiong; Li, Shuhong – ETS Research Report Series, 2019
In assembling testlets (i.e., test forms) with a pool of new and used item blocks, test security is one of the main issues of concern. Strict constraints are often imposed on repeated usage of the same item blocks. Nevertheless, for an assessment administering multiple testlets, a goal is to select as large a sample of testlets as possible. In…
Descriptors: Test Construction, Sampling, Test Items, Mathematics
Peer reviewed
Direct link
Svenja Woitt; Joshua Weidlich; Ioana Jivet; Derya Orhan Göksün; Hendrik Drachsler; Marco Kalz – Teaching in Higher Education, 2025
Given the crucial role of feedback in supporting learning in higher education, understanding the factors influencing feedback effectiveness is imperative. Student feedback literacy, that is, the set of attitudes and abilities to make sense of and utilize feedback, is therefore considered a key concept. Rigorous investigations of feedback literacy…
Descriptors: Feedback (Response), Higher Education, Multiple Literacies, Teacher Effectiveness
Peer reviewed
Direct link
Roger Young; Emily Courtney; Alexander Kah; Mariah Wilkerson; Yi-Hsin Chen – Teaching of Psychology, 2025
Background: Multiple-choice item (MCI) assessments are burdensome for instructors to develop. Artificial intelligence (AI, e.g., ChatGPT) can streamline the process without sacrificing quality: the quality of AI-generated MCIs is comparable to that of MCIs written by human experts. However, whether the quality of AI-generated MCIs is equally good across various domain-…
Descriptors: Item Response Theory, Multiple Choice Tests, Psychology, Textbooks
Peer reviewed
Direct link
Goran Trajkovski; Heather Hayes – Digital Education and Learning, 2025
This book explores the transformative role of artificial intelligence in educational assessment, catering to researchers, educators, administrators, policymakers, and technologists involved in shaping the future of education. It delves into the foundations of AI-assisted assessment, innovative question types and formats, data analysis techniques,…
Descriptors: Artificial Intelligence, Educational Assessment, Computer Uses in Education, Test Format
Peer reviewed
PDF on ERIC Download full text
Abebe Tewachew – GIST Education and Learning Research Journal, 2025
An essential component of language instruction is classroom-based assessment, which is used to inform instructional decisions and gauge student progress. The current study explores how EFL teachers visualize developing classroom-based assessments at Debark Secondary Schools in the North Gondar Zone. The study employed a concurrent parallel…
Descriptors: Foreign Countries, English (Second Language), Second Language Learning, Second Language Instruction
Peer reviewed
Direct link
Shuchen Guo; Lehong Shi; Xiaoming Zhai – Education and Information Technologies, 2025
As artificial intelligence (AI) receives wider attention in education, examining teachers' acceptance of AI (TAAI) becomes essential. However, existing instruments measuring TAAI reported limited validity evidence and faced some design challenges, such as missing informed definitions of AI to participants. To fill this gap, this study developed…
Descriptors: Artificial Intelligence, Technology Uses in Education, Teacher Attitudes, Test Construction
Peer reviewed
Direct link
Jiayi Deng – Large-scale Assessments in Education, 2025
Background: Test score comparability in international large-scale assessments (LSAs) is of great importance for ensuring test fairness. To compare test scores effectively on an international scale, score linking is widely used to convert raw scores from different linguistic versions of test forms onto a common score scale. An example is the multigroup…
Descriptors: Guessing (Tests), Item Response Theory, Error Patterns, Arabic
Peer reviewed
Direct link
Laila El-Hamamsy; María Zapata-Cáceres; Estefanía Martín-Barroso; Francesco Mondada; Jessica Dehler Zufferey; Barbara Bruno; Marcos Román-González – Technology, Knowledge and Learning, 2025
The introduction of computing education into curricula worldwide requires multi-year assessments to evaluate the long-term impact on learning. However, no single Computational Thinking (CT) assessment spans primary school, and no group of CT assessments provides a means of transitioning between instruments. This study therefore investigated…
Descriptors: Cognitive Tests, Computation, Thinking Skills, Test Validity
Peer reviewed
PDF on ERIC Download full text
Sara T. Cushing – ETS Research Report Series, 2025
This report provides an in-depth comparison of TOEFL iBT® and the Duolingo English Test (DET) in terms of the degree to which both tests assess academic language proficiency in listening, reading, writing, and speaking. The analysis is based on publicly available documentation on both tests, including sample test questions available on the test…
Descriptors: Language Tests, English (Second Language), Second Language Learning, Academic Language