Showing 196 to 210 of 1,057 results
Nixi Wang – ProQuest LLC, 2022
Measurement error attributable to cultural factors poses complex challenges for educational assessment. We need assessments that are sensitive to the cultural heterogeneity of test-taking populations, and psychometric methods suited to addressing fairness and equity concerns. Building on research in culturally responsive assessment, this dissertation…
Descriptors: Culturally Relevant Education, Testing, Equal Education, Validity
Peer reviewed
Kao, Yu-Ting; Kuo, Hung-Chih – Interactive Learning Environments, 2023
This study implemented the principles of dynamic assessment (DA) with computer technology (iSpring Quiz Maker) to (1) identify the English listening difficulties of 172 L2 English learners; (2) diagnose their individual learning needs; and (3) promote their future potential abilities. Upon evaluating the participating junior high school students'…
Descriptors: Listening Comprehension Tests, English (Second Language), Second Language Learning, Second Language Instruction
Peer reviewed
Cohen, Dale J.; Ballman, Alesha; Rijmen, Frank; Cohen, Jon – Applied Measurement in Education, 2020
Computer-based, pop-up glossaries are perhaps the most promising accommodation aimed at mitigating the influence of linguistic structure and cultural bias on the performance of English Learner (EL) students on statewide assessments. To date, there is no established procedure for identifying the words that require a glossary for EL students that is…
Descriptors: Glossaries, Testing Accommodations, English Language Learners, Computer Assisted Testing
Peer reviewed
Tsaousis, Ioannis; Sideridis, Georgios D.; AlGhamdi, Hannan M. – Journal of Psychoeducational Assessment, 2021
This study evaluated the psychometric quality of a computerized adaptive testing (CAT) version of the general cognitive ability test (GCAT), using a simulation study protocol put forth by Han, K. T. (2018a). For the needs of the analysis, three different sets of items were generated, providing an item pool of 165 items. Before evaluating the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Cognitive Tests, Cognitive Ability
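The Tsaousis et al. entry above evaluates a computerized adaptive test against a generated item pool. As a hedged sketch of the general technique (not the GCAT's actual algorithm; the pool values below are invented), a common CAT rule selects the unadministered item with maximum Fisher information at the current ability estimate under a 2PL model:

```python
import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * p * (1 - p)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_next_item(theta, pool, administered):
    """Pick the unadministered item with maximum information at theta."""
    candidates = [i for i in range(len(pool)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, *pool[i]))

# toy pool of (discrimination a, difficulty b) pairs
pool = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.2), (2.0, 0.0)]
print(select_next_item(0.0, pool, administered={4}))  # -> 2
```

Simulation protocols like Han's then repeat this selection loop over simulees to study exposure and measurement precision.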
Peer reviewed
Ponce, Héctor R.; Mayer, Richard E.; Loyola, María Soledad – Journal of Educational Computing Research, 2021
One of the most common technology-enhanced items used in large-scale K-12 testing programs is the drag-and-drop response interaction. The main research questions in this study are: (a) Does adding a drag-and-drop interface to an online test affect the accuracy of student performance? (b) Does adding a drag-and-drop interface to an online test…
Descriptors: Computer Assisted Testing, Test Construction, Standardized Tests, Elementary School Students
Peer reviewed
PDF on ERIC
Gangur, Mikuláš; Plevny, Miroslav – Journal on Efficiency and Responsibility in Education and Science, 2018
The paper presents a possible way of solving the problem of creating multiple test variants for a large number of students divided into groups. The proposed solution consists of introducing a parameterized automatic test generator, whose principle is shown. The process of the question tree construction…
Descriptors: Computer Assisted Testing, Test Construction, Test Items, Heuristics
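The Gangur and Plevny entry describes generating many test variants by parameterizing question templates. A minimal sketch of the idea (the arithmetic template and per-student seeding scheme below are assumptions for illustration, not the authors' design):

```python
import random

def generate_variant(seed):
    """One parameterized item: numbers drawn deterministically from a per-student seed."""
    rng = random.Random(seed)
    a, b = rng.randint(10, 99), rng.randint(2, 9)
    stem = (f"A batch of {a} units is split evenly among {b} machines. "
            f"How many whole units does each machine receive?")
    return stem, a // b  # (question text, answer key)

# distinct but reproducible variants for three students
for student_id in (101, 102, 103):
    stem, key = generate_variant(student_id)
    print(student_id, key)
```

Because each variant is a pure function of the seed, the same student always sees the same item, while different groups see different numbers.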
Peer reviewed
Lundgren, Erik; Eklöf, Hanna – Educational Research and Evaluation, 2020
The present study used process data from a computer-based problem-solving task as indications of behavioural level of test-taking effort, and explored how behavioural item-level effort related to overall test performance and self-reported effort. Variables were extracted from raw process data and clustered. Four distinct clusters were obtained and…
Descriptors: Computer Assisted Testing, Problem Solving, Response Style (Tests), Test Items
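Lundgren and Eklöf extract variables from raw process data and cluster them to characterize behavioural effort. As a rough stdlib-only sketch (the feature pairs below are invented, and the study's own clustering method is not reproduced here), a tiny k-means over per-examinee (time-on-task, action-count) features:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means over small 2-D feature vectors."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((x - c) ** 2 for x, c in zip(p, centers[i])))
            groups[nearest].append(p)
        # move each center to its group's mean; keep it if the group emptied
        centers = [tuple(sum(xs) / len(g) for xs in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

# invented (seconds_on_task, n_actions) pairs: disengaged vs engaged examinees
features = [(5, 2), (6, 3), (4, 2), (60, 40), (58, 38), (62, 41)]
low, high = sorted(kmeans(features, k=2), key=lambda c: c[0])
```

The resulting cluster assignments can then be related to test performance and self-reported effort, as the entry describes.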
Peer reviewed
Stack, Anna; Boitshwarelo, Bopelo; Reedy, Alison; Billany, Trevor; Reedy, Hannah; Sharma, Rajeev; Vemuri, Jyoti – Australasian Journal of Educational Technology, 2020
While research on online tests in higher education is steadily growing, there is little evidence in the literature of the use of learning management systems (LMS), such as Blackboard™, as rich sources of data on online tests practices. This paper reports on an investigation that used data from Blackboard™ LMS to gain insight into the purpose for…
Descriptors: Computer Assisted Testing, Integrated Learning Systems, Higher Education, Business Schools
Peer reviewed
PDF on ERIC
Wang, Yan; Murphy, Kevin B. – National Center for Education Statistics, 2020
In 2018, the National Center for Education Statistics (NCES) administered two assessments--the National Assessment of Educational Progress (NAEP) Technology and Engineering Literacy (TEL) assessment and the International Computer and Information Literacy Study (ICILS)--to two separate nationally representative samples of 8th-grade students in the…
Descriptors: National Competency Tests, International Assessment, Computer Literacy, Information Literacy
Peer reviewed
Rivas, Axel; Scasso, Martín Guillermo – Journal of Education Policy, 2021
Since 2000, the PISA test implemented by the OECD has become the prime benchmark for international comparisons in education. The 2015 PISA edition introduced methodological changes that altered the nature of its results: PISA no longer counted as valid the non-reached items in the final part of the test, assuming that those unanswered questions were more a…
Descriptors: Test Validity, Computer Assisted Testing, Foreign Countries, Achievement Tests
Peer reviewed
Dirkx, K. J. H.; Skuballa, I.; Manastirean-Zijlstra, C. S.; Jarodzka, H. – Instructional Science: An International Journal of the Learning Sciences, 2021
The use of computer-based tests (CBTs) for both formative and summative purposes has greatly increased in recent years. One major advantage of CBTs is the easy integration of multimedia, but it is unclear how to design such CBT environments with multimedia. The purpose of the current study was to examine whether guidelines for designing…
Descriptors: Test Construction, Computer Assisted Testing, Multimedia Instruction, Eye Movements
Peer reviewed
PDF on ERIC
Qiao Wang; Ralph L. Rose; Ayaka Sugawara; Naho Orita – Vocabulary Learning and Instruction, 2025
VocQGen is an automated tool designed to generate multiple-choice cloze (MCC) questions for vocabulary assessment in second language learning contexts. It leverages several natural language processing (NLP) tools and OpenAI's GPT-4 model to produce MCC items quickly from user-specified word lists. To evaluate its effectiveness, we used the first…
Descriptors: Vocabulary Skills, Artificial Intelligence, Computer Software, Multiple Choice Tests
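VocQGen's actual pipeline (NLP tooling plus GPT-4) is not reproduced here, but the basic shape of a multiple-choice cloze (MCC) item can be sketched in a few lines; the carrier sentence and word list below are invented, and a real system would match distractors on part of speech and frequency:

```python
import random

def make_mcc_item(sentence, target, word_list, n_distractors=3, seed=0):
    """Cloze stem: blank the target word; sample distractors from the word list."""
    rng = random.Random(seed)
    stem = sentence.replace(target, "_____", 1)
    distractors = rng.sample([w for w in word_list if w != target], n_distractors)
    options = distractors + [target]
    rng.shuffle(options)
    return {"stem": stem, "options": options, "key": target}

item = make_mcc_item("The committee will convene next week.", "convene",
                     ["convene", "assemble", "disperse", "adjourn", "ratify"])
```

Evaluating such generated items (as the entry does with the tool's output) then focuses on whether the key is unambiguous and the distractors are plausible but wrong.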
Peer reviewed
Joo, Seang-Hwane; Lee, Philseok; Stark, Stephen – Journal of Educational Measurement, 2018
This research derived information functions and proposed new scalar information indices to examine the quality of multidimensional forced choice (MFC) items based on the RANK model. We also explored how GGUM-RANK information, latent trait recovery, and reliability varied across three MFC formats: pairs (two response alternatives), triplets (three…
Descriptors: Item Response Theory, Models, Item Analysis, Reliability
Peer reviewed
Bao, Yu; Bradshaw, Laine – Measurement: Interdisciplinary Research and Perspectives, 2018
Diagnostic classification models (DCMs) can provide multidimensional diagnostic feedback about students' mastery levels of knowledge components or attributes. One advantage of using DCMs is the ability to accurately and reliably classify students into mastery levels with a relatively small number of items per attribute. Combining DCMs with…
Descriptors: Test Items, Selection, Adaptive Testing, Computer Assisted Testing
Lin, Ye – ProQuest LLC, 2018
With the widespread use of technology in the assessment field, many testing programs use both computer-based tests (CBTs) and paper-and-pencil tests (PPTs). Both the Standards for Educational and Psychological Testing (AERA, APA, & NCME, 2014) and the International Guidelines on Computer-Based and Internet Delivered Testing (International Test…
Descriptors: Computer Assisted Testing, Testing, Student Evaluation, Elementary School Students