Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 4
Since 2016 (last 10 years): 14
Since 2006 (last 20 years): 26
Descriptor
Computer Assisted Testing: 33
Models: 33
Psychometrics: 33
Test Items: 17
Item Response Theory: 15
Test Construction: 13
Adaptive Testing: 8
Comparative Analysis: 7
Educational Assessment: 7
Measurement Techniques: 7
Computer Software: 6
Author
Gierl, Mark J.: 3
Lai, Hollis: 3
Almond, Russell G.: 1
Arendasy, Martin E.: 1
Baron, Simon: 1
Bejar, Isaac I.: 1
Bergstrom, Betty: 1
Bernard, David: 1
Boulais, André-Philippe: 1
Boyd, Aimee M.: 1
Bukhari, Nurliyana: 1
Publication Type
Journal Articles: 24
Reports - Research: 12
Reports - Descriptive: 9
Reports - Evaluative: 5
Dissertations/Theses -…: 4
Books: 1
Information Analyses: 1
Opinion Papers: 1
Audience
Researchers: 2
Practitioners: 1
Students: 1
Location
Canada: 2
France: 1
Hong Kong: 1
United States: 1
Assessments and Surveys
Hidden Figures Test: 1
Carol Eckerly; Yue Jia; Paul Jewsbury – ETS Research Report Series, 2022
Testing programs have explored the use of technology-enhanced items alongside traditional item types (e.g., multiple-choice and constructed-response items) as measurement evidence of latent constructs modeled with item response theory (IRT). In this report, we discuss considerations in applying IRT models to a particular type of adaptive testlet…
Descriptors: Computer Assisted Testing, Test Items, Item Response Theory, Scoring
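Several entries in these results apply item response theory (IRT) models to computer-based test data. As a hedged illustration (not code from any of the cited works), the standard two-parameter logistic (2PL) item response function can be sketched as:

```python
# Two-parameter logistic (2PL) IRT model: the probability that an
# examinee with ability theta answers an item correctly, given the
# item's discrimination (a) and difficulty (b).
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """P(correct) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# When ability equals item difficulty, the probability is exactly 0.5.
print(p_correct(0.0, a=1.2, b=0.0))  # → 0.5
```

The parameter values here are arbitrary placeholders; operational testing programs estimate a and b from response data.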
Qunbar, Sa'ed Ali – ProQuest LLC, 2019
This work presents a study that used distributed language representations of test items to model test item difficulty. Distributed language representations are low-dimensional numeric representations of written language inspired by and generated with artificial neural network architectures. The research begins with a discussion of the importance of item…
Descriptors: Computer Assisted Testing, Test Items, Difficulty Level, Models
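The entry above models item difficulty from distributed language representations. A minimal sketch of that idea, using synthetic embeddings and ordinary least squares in place of the dissertation's actual pipeline:

```python
# Synthetic illustration: regress item difficulty on low-dimensional
# numeric item representations ("embeddings"). All data here are
# simulated placeholders, not real test items.
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 50, 8
X = rng.normal(size=(n_items, dim))              # item embeddings
w_true = rng.normal(size=dim)                    # true linear relation
y = X @ w_true + 0.1 * rng.normal(size=n_items)  # simulated difficulties

# Ordinary least squares: predict difficulty from the representation.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ w_hat
```

With real items, X would come from a trained language model and y from calibrated difficulty estimates; the regression step is the same.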
Pásztor, Attila; Magyar, Andrea; Pásztor-Kovács, Anita; Rausch, Attila – Journal of Intelligence, 2022
The aims of the study were (1) to develop a domain-general computer-based assessment tool for inductive reasoning and to empirically test the theoretical models of Klauer and Christou and Papageorgiou; and (2) to develop an online game to foster inductive reasoning through mathematical content and to investigate its effectiveness. The sample was…
Descriptors: Game Based Learning, Logical Thinking, Computer Assisted Testing, Models
Mark Wilson; Kathleen Scalise; Perman Gochyyev – Educational Psychology, 2019
In this article, we describe a software system for assessment development in online learning environments in contexts where there are robust links to cognitive modelling, including domain and student modelling. BEAR Assessment System Software (BASS) both establishes a theoretical basis for the domain modelling logic and offers tools for delivery,…
Descriptors: Computer Software, Electronic Learning, Test Construction, Intelligent Tutoring Systems
Joshua B. Gilbert; James S. Kim; Luke W. Miratrix – Annenberg Institute for School Reform at Brown University, 2022
Analyses that reveal how treatment effects vary allow researchers, practitioners, and policymakers to better understand the efficacy of educational interventions. In practice, however, standard statistical methods for addressing Heterogeneous Treatment Effects (HTE) fail to address the HTE that may exist within outcome measures. In this study, we…
Descriptors: Item Response Theory, Models, Formative Evaluation, Statistical Inference
Nixi Wang – ProQuest LLC, 2022
Measurement errors attributable to cultural issues are complex and challenging for educational assessments. We need assessment tests sensitive to the cultural heterogeneity of populations, and psychometric methods appropriate to address fairness and equity concerns. Built on the research of culturally responsive assessment, this dissertation…
Descriptors: Culturally Relevant Education, Testing, Equal Education, Validity
von Davier, Matthias; Khorramdel, Lale; He, Qiwei; Shin, Hyo Jeong; Chen, Haiwen – Journal of Educational and Behavioral Statistics, 2019
International large-scale assessments (ILSAs) have transitioned from paper-based assessments to computer-based assessments (CBAs), facilitating the use of new item types and more effective data collection tools. This allows implementation of more complex test designs and collection of process and response time (RT) data. These new data types can be used to…
Descriptors: International Assessment, Computer Assisted Testing, Psychometrics, Item Response Theory
Storme, Martin; Myszkowski, Nils; Baron, Simon; Bernard, David – Journal of Intelligence, 2019
Assessing job applicants' general mental ability online poses psychometric challenges due to the necessity of having brief but accurate tests. Recent research (Myszkowski & Storme, 2018) suggests that recovering distractor information through Nested Logit Models (NLM; Suh & Bolt, 2010) increases the reliability of ability estimates in…
Descriptors: Intelligence Tests, Item Response Theory, Comparative Analysis, Test Reliability
Bukhari, Nurliyana – ProQuest LLC, 2017
In general, newer educational assessments pose more demanding challenges than students are currently prepared to face. Two types of factors may contribute to the test scores: (1) factors or dimensions that are of primary interest to the construct or test domain; and, (2) factors or dimensions that are irrelevant to the construct, causing…
Descriptors: Item Response Theory, Models, Psychometrics, Computer Simulation
Su, Shiyang – ProQuest LLC, 2017
With online assessment becoming mainstream and the recording of response times becoming straightforward, the importance of response times as a measure of psychological constructs has been recognized, and the literature on modeling response times has grown over the last few decades. Previous studies have tried to formulate models and theories to…
Descriptors: Reading Comprehension, Item Response Theory, Models, Reaction Time
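The entry above concerns models of response times. One widely used formulation in this literature is the lognormal response-time model (van der Linden, 2006), in which log time equals an item's time intensity minus the person's speed. A minimal simulation sketch (not the dissertation's own model):

```python
# Lognormal response-time model: ln T = beta_item - tau_person + noise.
# beta is the item's time intensity; tau is the examinee's speed.
import random

random.seed(0)

def simulate_log_time(beta_item: float, tau_person: float,
                      sigma: float = 0.3) -> float:
    """Draw one log response time from the lognormal RT model."""
    return beta_item - tau_person + random.gauss(0.0, sigma)

# A faster examinee (larger tau) tends to produce shorter times.
slow = [simulate_log_time(4.0, tau_person=0.0) for _ in range(1000)]
fast = [simulate_log_time(4.0, tau_person=1.0) for _ in range(1000)]
```

In practice beta and tau are estimated jointly from observed times, often alongside an IRT model for the responses themselves.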
Dickison, Philip; Luo, Xiao; Kim, Doyoung; Woo, Ada; Muntean, William; Bergstrom, Betty – Journal of Applied Testing Technology, 2016
Designing a theory-based assessment with sound psychometric qualities to measure a higher-order cognitive construct is a highly desired yet challenging task for many practitioners. This paper proposes a framework for designing a theory-based assessment to measure a higher-order cognitive construct. This framework results in a modularized yet…
Descriptors: Thinking Skills, Cognitive Tests, Test Construction, Nursing
Gierl, Mark J.; Lai, Hollis – Educational Measurement: Issues and Practice, 2016
Testing organizations need large numbers of high-quality items due to the proliferation of alternative test administration methods and modern test designs. But the current demand for items far exceeds the supply. Test items, as they are currently written, rely on a process that is both time-consuming and expensive because each item is written,…
Descriptors: Test Items, Test Construction, Psychometrics, Models
Wolf, Mikyung Kim; Guzman-Orth, Danielle; Lopez, Alexis; Castellano, Katherine; Himelfarb, Igor; Tsutagawa, Fred S. – Educational Assessment, 2016
This article investigates ways to improve the assessment of English learner students' English language proficiency given the current movement of creating next-generation English language proficiency assessments in the Common Core era. In particular, this article discusses the integration of scaffolding strategies, which are prevalently utilized as…
Descriptors: English Language Learners, Scaffolding (Teaching Technique), Language Tests, Language Proficiency
Boyd, Aimee M.; Dodd, Barbara; Fitzpatrick, Steven – Applied Measurement in Education, 2013
This study compared several exposure control procedures for CAT systems based on the three-parameter logistic testlet response theory model (Wang, Bradlow, & Wainer, 2002) and Masters' (1982) partial credit model when applied to a pool consisting entirely of testlets. The exposure control procedures studied were the modified within 0.10 logits…
Descriptors: Computer Assisted Testing, Item Response Theory, Test Construction, Models
Gierl, Mark J.; Lai, Hollis; Pugh, Debra; Touchie, Claire; Boulais, André-Philippe; De Champlain, André – Applied Measurement in Education, 2016
Item development is a time- and resource-intensive process. Automatic item generation integrates cognitive modeling with computer technology to systematically generate test items. To date, however, items generated using cognitive modeling procedures have received limited use in operational testing situations. As a result, the psychometric…
Descriptors: Psychometrics, Multiple Choice Tests, Test Items, Item Analysis