Che Lah, Noor Hidayah; Tasir, Zaidatun; Jumaat, Nurul Farhana – Educational Studies, 2023
The aim of the study was to evaluate the extended version of the Problem-Solving Inventory (PSI) via an online learning setting known as the Online Problem-Solving Inventory (OPSI) through the lens of Rasch Model analysis. To date, there is no extended version of the PSI for online settings even though many researchers have used it; thus, this…
Descriptors: Problem Solving, Measures (Individuals), Electronic Learning, Item Response Theory

Liu, Kai; Zhang, Longfei; Tu, Dongbo; Cai, Yan – SAGE Open, 2022
We aimed to develop an item bank of computerized adaptive testing for eating disorders (CAT-ED) in Chinese university students to increase measurement precision and improve test efficiency. A total of 1,025 Chinese undergraduate respondents answered a series of questions about eating disorders in a paper-pencil test. A total of 133 items from four…
Descriptors: Item Analysis, Eating Disorders, Computer Assisted Testing, Goodness of Fit

Gorney, Kylie; Wollack, James A. – Practical Assessment, Research & Evaluation, 2022
Unlike the traditional multiple-choice (MC) format, the discrete-option multiple-choice (DOMC) format does not necessarily reveal all answer options to an examinee. The purpose of this study was to determine whether the reduced exposure of item content affects test security. We conducted an experiment in which participants were allowed to view…
Descriptors: Test Items, Test Format, Multiple Choice Tests, Item Analysis

Fadillah, Sarah Meilani; Ha, Minsu; Nuraeni, Eni; Indriyanti, Nurma Yunita – Malaysian Journal of Learning and Instruction, 2023
Purpose: Researchers discovered that when students were given the opportunity to change their answers, a majority changed their responses from incorrect to correct, and this change often increased the overall test results. What prompts students to modify their answers? This study aims to examine the modification of scientific reasoning test, with…
Descriptors: Science Tests, Multiple Choice Tests, Test Items, Decision Making

Conejo, Ricardo; Barros, Beatriz; Bertoa, Manuel F. – IEEE Transactions on Learning Technologies, 2019
This paper presents an innovative method to tackle the automatic evaluation of programming assignments with an approach based on well-founded assessment theories (Classical Test Theory (CTT) and Item Response Theory (IRT)) instead of heuristic assessment as in other systems. CTT and/or IRT are used to grade the results of different items of…
Descriptors: Computer Assisted Testing, Grading, Programming, Item Response Theory

Küchemann, Stefan; Malone, Sarah; Edelsbrunner, Peter; Lichtenberger, Andreas; Stern, Elsbeth; Schumacher, Ralph; Brünken, Roland; Vaterlaus, Andreas; Kuhn, Jochen – Physical Review Physics Education Research, 2021
Representational competence is essential for the acquisition of conceptual understanding in physics. It enables the interpretation of diagrams, graphs, and mathematical equations, and relating these to one another as well as to observations and experimental outcomes. In this study, we present the initial validation of a newly developed…
Descriptors: Physics, Science Instruction, Teaching Methods, Concept Formation

Kaya, Elif; O'Grady, Stefan; Kalender, Ilker – Language Testing, 2022
Language proficiency testing serves an important function of classifying examinees into different categories of ability. However, misclassification is to some extent inevitable and may have important consequences for stakeholders. Recent research suggests that classification efficacy may be enhanced substantially using computerized adaptive…
Descriptors: Item Response Theory, Test Items, Language Tests, Classification

Ueno, Maomi; Miyazawa, Yoshimitsu – IEEE Transactions on Learning Technologies, 2018
Over the past few decades, many studies conducted in the field of learning science have described that scaffolding plays an important role in human learning. To scaffold a learner efficiently, a teacher should predict how much support a learner must have to complete tasks and then decide the optimal degree of assistance to support the learner's…
Descriptors: Scaffolding (Teaching Technique), Prediction, Probability, Comparative Analysis

Liu, Sha; Kunnan, Antony John – CALICO Journal, 2016
This study investigated the application of "WriteToLearn" on Chinese undergraduate English majors' essays in terms of its scoring ability and the accuracy of its error feedback. Participants were 163 second-year English majors from a university located in Sichuan province who wrote 326 essays from two writing prompts. Each paper was…
Descriptors: Foreign Countries, Undergraduate Students, English (Second Language), Second Language Learning

Mahmud, Zamalia; Porter, Anne – Indonesian Mathematical Society Journal on Mathematics Education, 2015
Students' understanding of probability concepts has been investigated from various different perspectives. This study was set out to investigate perceived understanding of probability concepts of forty-four students from the STAT131 Understanding Uncertainty and Variation course at the University of Wollongong, NSW. Rasch measurement which is…
Descriptors: Probability, Concept Teaching, Item Response Theory, Computer Assisted Testing

Whittaker, Tiffany A.; Williams, Natasha J.; Dodd, Barbara G. – Educational Assessment, 2011
This study assessed the interpretability of scaled scores based on either number correct (NC) scoring for a paper-and-pencil test or one of two methods of scoring computer-based tests: an item pattern (IP) scoring method and a method based on equated NC scoring. The equated NC scoring method for computer-based tests was proposed as an alternative…
Descriptors: Computer Assisted Testing, Scoring, Test Interpretation, Equated Scores

Forbey, Johnathan D.; Ben-Porath, Yossef S. – Psychological Assessment, 2007
Computerized adaptive testing in personality assessment can improve efficiency by significantly reducing the number of items administered to answer an assessment question. Two approaches have been explored for adaptive testing in computerized personality assessment: item response theory and the countdown method. In this article, the authors…
Descriptors: Personality Traits, Computer Assisted Testing, Test Validity, Personality Assessment

Ferrando, Pere J. – Psicologica: International Journal of Methodology and Experimental Psychology, 2006
This study assessed the hypothesis that the response time to an item increases as the positions of the item and the respondent on the continuum of the trait that is measured draw closer together. This hypothesis has previously been stated by several authors, but so far it does not seem to have been empirically assessed in a rigorous way. A…
Descriptors: Reaction Time, Personality, Effect Size, Item Response Theory

Xu, Yuejin; Iran-Nejad, Asghar; Thoma, Stephen J. – Journal of Interactive Online Learning, 2007
The purpose of the study was to determine the comparability of an online version to the original paper-pencil version of the Defining Issues Test 2 (DIT2). This study employed methods from both Classical Test Theory (CTT) and Item Response Theory (IRT). Findings from CTT analyses supported the reliability and discriminant validity of both versions.…
Descriptors: Computer Assisted Testing, Test Format, Comparative Analysis, Test Theory

Luk, HingKwan – 1991
This study examined whether an expert system approach involving intelligent selection of items (EXSPRT-I) is as efficient as item response theory (IRT) based three-parameter adaptive mastery testing (AMT) when there are enough subjects to estimate the three IRT item parameters for all items in the test and when subjects in the item parameter…
Descriptors: Achievement Tests, Adaptive Testing, College Entrance Examinations, Comparative Analysis