Showing 1 to 15 of 112 results
Peer reviewed
Direct link
Kárász, Judit T.; Széll, Krisztián; Takács, Szabolcs – Quality Assurance in Education: An International Perspective, 2023
Purpose: Based on a general formula that depends on the length and difficulty of the test, the number of respondents and the number of ability levels, this study aims to provide a closed formula for adaptive tests of medium difficulty (probability of solution p = 1/2) to determine the accuracy of the parameters for each item and in…
Descriptors: Test Length, Probability, Comparative Analysis, Difficulty Level
Peer reviewed
PDF on ERIC: Download full text
Kim, Sooyeon; Walker, Michael – ETS Research Report Series, 2021
In this investigation, we used real data to assess potential differential effects associated with taking a test in a test center (TC) versus testing at home using remote proctoring (RP). We used a pseudo-equivalent groups (PEG) approach to examine group equivalence at the item level and the total score level. If our assumption holds that the PEG…
Descriptors: Testing, Distance Education, Comparative Analysis, Test Items
Benton, Tom – Research Matters, 2021
Computer adaptive testing is intended to make assessment more reliable by tailoring the difficulty of the questions a student has to answer to their level of ability. Most commonly, this benefit is used to justify shortening tests whilst retaining the reliability of a longer, non-adaptive test. Improvements due to adaptive…
Descriptors: Risk, Item Response Theory, Computer Assisted Testing, Difficulty Level
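For readers new to the field, the tailoring idea summarized in the Benton (2021) abstract above can be illustrated with a minimal Python sketch of a computerized adaptive test under a Rasch (1PL) IRT model. This is a generic illustration under assumed conditions, not the procedure from any study listed here; the item bank, ability values, and all function names are hypothetical.

import math
import random

def p_correct(theta, b):
    # Rasch model: probability of a correct response given ability theta
    # and item difficulty b.
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    # Fisher information of a Rasch item at ability theta: P * (1 - P).
    # It peaks when difficulty matches ability, which is the core of CAT.
    p = p_correct(theta, b)
    return p * (1.0 - p)

def estimate_theta(responses):
    # Maximum-likelihood ability estimate over a coarse grid (-4.0 .. 4.0).
    grid = [x / 10.0 for x in range(-40, 41)]
    def log_lik(theta):
        ll = 0.0
        for b, correct in responses:
            p = p_correct(theta, b)
            ll += math.log(p if correct else 1.0 - p)
        return ll
    return max(grid, key=log_lik)

def run_cat(item_bank, true_theta, test_length=10):
    theta_hat = 0.0          # start from an average-ability guess
    remaining = list(item_bank)
    responses = []
    for _ in range(test_length):
        # Select the unused item that is most informative at the current
        # ability estimate, i.e. the one whose difficulty is closest to it.
        b = max(remaining, key=lambda b: item_information(theta_hat, b))
        remaining.remove(b)
        correct = random.random() < p_correct(true_theta, b)
        responses.append((b, correct))
        theta_hat = estimate_theta(responses)
    return theta_hat

if __name__ == "__main__":
    random.seed(0)
    bank = [random.uniform(-3, 3) for _ in range(200)]  # hypothetical pool
    print("estimated ability:", run_cat(bank, true_theta=1.2))

Because each administered item sits near the examinee's current ability estimate, every response is maximally informative, which is why a short adaptive test can match the reliability of a longer fixed-form test.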
Peer reviewed
Direct link
Zhang, Jinming; Li, Jie – Journal of Educational Measurement, 2016
An IRT-based sequential procedure is developed to monitor items for enhancing test security. The procedure uses a series of statistical hypothesis tests to examine whether the statistical characteristics of each item under inspection have changed significantly during CAT administration. This procedure is compared with a previously developed…
Descriptors: Computer Assisted Testing, Test Items, Difficulty Level, Item Response Theory
Peer reviewed
Direct link
Veldkamp, Bernard P. – Journal of Educational Measurement, 2016
Many standardized tests are now administered via computer rather than paper-and-pencil format. The computer-based delivery mode brings with it certain advantages. One advantage is the ability to adapt the difficulty level of the test to the ability level of the test taker in what has been termed computerized adaptive testing (CAT). A second…
Descriptors: Computer Assisted Testing, Reaction Time, Standardized Tests, Difficulty Level
Peer reviewed
Direct link
Kuang, Huan; Sahin, Fusun – Large-scale Assessments in Education, 2023
Background: Examinees may not make enough effort when responding to test items if the assessment has no consequence for them. These disengaged responses can be problematic in low-stakes, large-scale assessments because they can bias item parameter estimates. However, the amount of bias, and whether this bias is similar across administrations, is…
Descriptors: Test Items, Comparative Analysis, Mathematics Tests, Reaction Time
Peer reviewed
PDF on ERIC: Download full text
Kim, Sooyeon; Lu, Ru – ETS Research Report Series, 2018
The purpose of this study was to evaluate the effectiveness of linking test scores by using test takers' background data to form pseudo-equivalent groups (PEG) of test takers. Using 4 operational test forms that each included 100 items and were taken by more than 30,000 test takers, we created 2 half-length research forms that had either 20…
Descriptors: Test Items, Item Banks, Difficulty Level, Comparative Analysis
Peer reviewed
Direct link
Zhang, Lishan; VanLehn, Kurt – Interactive Learning Environments, 2021
Despite their drawbacks, multiple-choice questions are an enduring feature of instruction because they can be answered more rapidly than open-response questions and are easily scored. However, it can be difficult to generate good incorrect choices (called "distractors"). We designed an algorithm to generate distractors from a…
Descriptors: Semantics, Networks, Multiple Choice Tests, Teaching Methods
Peer reviewed
Direct link
Moothedath, Shana; Chaporkar, Prasanna; Belur, Madhu N. – Perspectives in Education, 2016
In recent years, the computerised adaptive test (CAT) has gained popularity over conventional exams in evaluating student capabilities with desired accuracy. However, the key limitation of CAT is that it requires a large pool of pre-calibrated questions. In the absence of such a pre-calibrated question bank, offline exams with uncalibrated…
Descriptors: Guessing (Tests), Computer Assisted Testing, Adaptive Testing, Maximum Likelihood Statistics
Peer reviewed
Direct link
Shaw, Sebastian C. K.; Hennessy, Laura R.; Anderson, John L. – Advances in Health Sciences Education, 2022
Dyslexia is a Specific Learning Difficulty that impacts on reading and writing abilities. During the COVID-19 pandemic, medical schools have been forced to undertake distance learning and assessment. The wider literature suggested that e-learning might pose additional challenges for dyslexic students. Here we explore their overall experiences of…
Descriptors: Medical Students, Medical Education, COVID-19, Pandemics
Peer reviewed
PDF on ERIC: Download full text
Yao, Don – English Language Teaching, 2020
Computer-based tests (CBT) and paper-based tests (PBT) are two test modes that have been widely adopted in the field of language testing and assessment over the last few decades. Due to the rapid development of science and technology, it has become a trend for universities and educational institutions to deliver the…
Descriptors: Language Tests, Computer Assisted Testing, Test Format, Comparative Analysis
Peer reviewed
Direct link
England, Benjamin D.; Ortegren, Francesca R.; Serra, Michael J. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2017
Framing metacognitive judgments of learning (JOLs) in terms of the likelihood of forgetting rather than remembering consistently yields a counterintuitive outcome: The mean of participants' forget-framed JOLs is often higher (after reverse-scoring) than the mean of their remember-framed JOLs, suggesting greater confidence in memory. In the present…
Descriptors: Metacognition, Evaluative Thinking, Learning, Memory
Peer reviewed
Direct link
Gonzalez-Barrero, Ana Maria; Nadig, Aparna S. – Child Development, 2019
This study investigated the effects of bilingualism on set-shifting and working memory in children with autism spectrum disorders (ASD). Bilinguals with ASD were predicted to display a specific bilingual advantage in set-shifting, but not working memory, relative to monolinguals with ASD. Forty 6- to 9-year-old children participated (20 ASD, 20…
Descriptors: Bilingualism, Autism, Pervasive Developmental Disorders, Short Term Memory
Peer reviewed
Direct link
Kim, Ahyoung Alicia; Tywoniw, Rurik L.; Chapman, Mark – Language Assessment Quarterly, 2022
Technology-enhanced items (TEIs) are innovative, computer-delivered test items that allow test takers to better interact with the test environment compared to traditional multiple-choice items (MCIs). The interactive nature of TEIs offers improved construct coverage compared with MCIs, but little research exists regarding students' performance on…
Descriptors: Language Tests, Test Items, Computer Assisted Testing, English (Second Language)
Liu, Junhui; Brown, Terran; Chen, Jianshen; Ali, Usama; Hou, Likun; Costanzo, Kate – Partnership for Assessment of Readiness for College and Careers, 2016
The Partnership for Assessment of Readiness for College and Careers (PARCC) is a state-led consortium working to develop next-generation assessments that more accurately, compared to previous assessments, measure student progress toward college and career readiness. The PARCC assessments include both English Language Arts/Literacy (ELA/L) and…
Descriptors: Testing, Achievement Tests, Test Items, Test Bias