| Publication Date | Count |
| In 2026 | 0 |
| Since 2025 | 25 |
| Since 2022 (last 5 years) | 131 |
| Since 2017 (last 10 years) | 263 |
| Since 2007 (last 20 years) | 492 |
| Descriptor | Count |
| Computer Assisted Testing | 1110 |
| Test Construction | 1110 |
| Test Items | 385 |
| Adaptive Testing | 274 |
| Foreign Countries | 233 |
| Test Validity | 196 |
| Item Banks | 194 |
| Higher Education | 165 |
| Evaluation Methods | 147 |
| Test Format | 146 |
| Test Reliability | 142 |
| Audience | Count |
| Researchers | 49 |
| Practitioners | 36 |
| Teachers | 21 |
| Administrators | 5 |
| Policymakers | 5 |
| Counselors | 2 |
| Media Staff | 1 |
| Support Staff | 1 |
| Location | Count |
| Australia | 18 |
| Canada | 16 |
| Taiwan | 13 |
| Turkey | 13 |
| Spain | 12 |
| United Kingdom | 12 |
| Germany | 11 |
| Indonesia | 10 |
| Oregon | 10 |
| China | 9 |
| United States | 9 |
| Laws, Policies, & Programs | Count |
| Individuals with Disabilities… | 10 |
| No Child Left Behind Act 2001 | 5 |
| Elementary and Secondary… | 1 |
| Elementary and Secondary… | 1 |
| What Works Clearinghouse Rating | Count |
| Does not meet standards | 1 |
Diao, Qi; van der Linden, Wim J. – Applied Psychological Measurement, 2013
Automated test assembly uses the methodology of mixed integer programming to select an optimal set of items from an item bank. Automated test-form generation uses the same methodology to optimally order the items and format the test form. From an optimization point of view, production of fully formatted test forms directly from the item pool using…
Descriptors: Automation, Test Construction, Test Format, Item Banks
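The Diao and van der Linden abstract above describes automated test assembly as a mixed integer programming (MIP) problem: select an optimal item subset from a bank subject to constraints. As a minimal sketch of that selection model only, the toy example below brute-forces a tiny hypothetical item bank (the item IDs, information values, and content quotas are all invented for illustration); a real system would hand the same objective and constraints to a MIP solver rather than enumerate subsets.

```python
from itertools import combinations

# Hypothetical mini item bank: (item_id, information_at_theta, content_area).
# A real automated-test-assembly system runs a MIP solver over thousands of
# items; this brute-force search over a toy bank only illustrates the model:
# maximize summed information subject to a fixed test length and
# per-content-area quotas.
BANK = [
    ("i1", 0.9, "algebra"), ("i2", 0.7, "algebra"), ("i3", 0.5, "algebra"),
    ("i4", 0.8, "geometry"), ("i5", 0.6, "geometry"), ("i6", 0.4, "geometry"),
]

def assemble(bank, test_length=4, quota=None):
    """Return the feasible item subset with maximal total information."""
    quota = quota or {"algebra": 2, "geometry": 2}
    best, best_info = None, -1.0
    for combo in combinations(bank, test_length):
        counts = {}
        for _, _, area in combo:
            counts[area] = counts.get(area, 0) + 1
        if counts != quota:          # content constraints violated
            continue
        info = sum(i for _, i, _ in combo)
        if info > best_info:
            best, best_info = combo, info
    return best, best_info

form, info = assemble(BANK)
print([item_id for item_id, _, _ in form], round(info, 2))
```

In a genuine MIP formulation each item gets a 0/1 decision variable, the objective is the information sum, and the length and content quotas become linear constraints; the brute force here finds the same optimum only because the bank is tiny.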
Boyd, Aimee M.; Dodd, Barbara; Fitzpatrick, Steven – Applied Measurement in Education, 2013
This study compared several exposure control procedures for CAT systems based on the three-parameter logistic testlet response theory model (Wang, Bradlow, & Wainer, 2002) and Masters' (1982) partial credit model when applied to a pool consisting entirely of testlets. The exposure control procedures studied were the modified within 0.10 logits…
Descriptors: Computer Assisted Testing, Item Response Theory, Test Construction, Models
Gierl, Mark J.; Lai, Hollis; Pugh, Debra; Touchie, Claire; Boulais, André-Philippe; De Champlain, André – Applied Measurement in Education, 2016
Item development is a time- and resource-intensive process. Automatic item generation integrates cognitive modeling with computer technology to systematically generate test items. To date, however, items generated using cognitive modeling procedures have received limited use in operational testing situations. As a result, the psychometric…
Descriptors: Psychometrics, Multiple Choice Tests, Test Items, Item Analysis
Yagci, Mustafa; Ünal, Menderes – Online Submission, 2014
This paper presents the design and application of an adaptive online exam system. Adaptive exam systems automatically and interactively determine a different question set for each student and measure each student's competence in a given area of a discipline rather than comparing students' gains with one another. Through an adaptive exam technique, a…
Descriptors: Adaptive Testing, Expertise, Information Security, Databases
Thissen, David – Journal of Educational and Behavioral Statistics, 2016
David Thissen, a professor in the Department of Psychology and Neuroscience, Quantitative Program at the University of North Carolina, has consulted and served on technical advisory committees for assessment programs that use item response theory (IRT) over the past couple decades. He has come to the conclusion that there are usually two purposes…
Descriptors: Item Response Theory, Test Construction, Testing Problems, Student Evaluation
Kebble, Paul Graham – The EUROCALL Review, 2016
The C-Test as a tool for assessing language competence has been in existence for nearly 40 years, having been designed by Professors Klein-Braley and Raatz for implementation in German and English. Much research has been conducted over the ensuing years, particularly with regard to reliability and construct validity, for which it is reported to…
Descriptors: Language Tests, Computer Software, Test Construction, Test Reliability
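The Kebble abstract above concerns the C-Test format. As a minimal sketch, assuming the classic damaging rule attributed to Klein-Braley and Raatz (delete the second half of every second word and leave a gap for the test taker to restore), the snippet below generates a C-Test passage from plain text; real C-Tests apply further rules (leaving the first sentence intact, skipping one-letter words across sentence boundaries, etc.) that are omitted here.

```python
def make_ctest(text: str) -> str:
    """Apply a simplified C-Test damaging rule to a text.

    Every second word loses its second half, which is replaced by
    underscores marking the letters the test taker must restore.
    """
    out = []
    for idx, word in enumerate(text.split()):
        if idx % 2 == 1 and len(word) > 1:    # damage every second word
            keep = (len(word) + 1) // 2       # keep the first half
            out.append(word[:keep] + "_" * (len(word) - keep))
        else:
            out.append(word)
    return " ".join(out)

print(make_ctest("Language testing has a long history in applied research"))
```

For example, the call above yields "Language test___ has a long hist___ in appl___ research"; one-letter words such as "a" are left intact.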
Martin, Michael O., Ed.; Mullis, Ina V. S., Ed.; Hooper, Martin, Ed. – International Association for the Evaluation of Educational Achievement, 2017
"Methods and Procedures in PIRLS 2016" documents the development of the Progress in International Reading Literacy Study (PIRLS) assessments and questionnaires and describes the methods used in sampling, translation verification, data collection, database construction, and the construction of the achievement and context questionnaire…
Descriptors: Foreign Countries, Achievement Tests, Grade 4, International Assessment
Petway, Kevin T., II; Rikoon, Samuel H.; Brenneman, Meghan W.; Burrus, Jeremy; Roberts, Richard D. – ETS Research Report Series, 2016
The Mission Skills Assessment (MSA) is an online assessment that targets 6 noncognitive constructs: creativity, curiosity, ethics, resilience, teamwork, and time management. Each construct is measured by means of a student self-report scale, a student alternative scale (e.g., situational judgment test), and a teacher report scale. Use of the MSA…
Descriptors: Test Construction, Computer Assisted Testing, Creativity, Imagination
Wagemaker, Hans, Ed. – International Association for the Evaluation of Educational Achievement, 2020
Although international large-scale assessment (ILSA) of education, pioneered by the International Association for the Evaluation of Educational Achievement, is now a well-established science, non-practitioners and many users often substantially misunderstand how large-scale assessments are conducted, what questions and challenges they are designed to…
Descriptors: International Assessment, Achievement Tests, Educational Assessment, Comparative Analysis
Briggs, Linda L. – Campus Technology, 2013
When it comes to secure testing online, even high-tech solutions rely on an old standby: a human proctor. This article asks the question: Is such an approach sustainable in the long run? A student labors over a midterm exam while a vigilant proctor peers over his shoulder, watching for any sign of cheating. It sounds like a tableau from a century…
Descriptors: Cheating, Computer Assisted Testing, Student Evaluation, Evaluation Methods
Golovachyova, Viktoriya N.; Menlibekova, Gulbakhyt Zh.; Abayeva, Nella F.; Ten, Tatyana L.; Kogaya, Galina D. – International Journal of Environmental and Science Education, 2016
Using computer-based monitoring systems that rely on tests could be the most effective way of evaluating knowledge. The problem of objective knowledge assessment by means of testing takes on a new dimension in the context of new paradigms in education. The analysis of existing test methods enabled us to conclude that tests with selected…
Descriptors: Expertise, Computer Assisted Testing, Student Evaluation, Knowledge Level
He, Lianzhen; Min, Shangchao – Language Assessment Quarterly, 2017
The first aim of this study was to develop a computer adaptive EFL test (CALT) that assesses test takers' listening and reading proficiency in English with dichotomous items and polytomous testlets. We reported in detail on the development of the CALT, including item banking, determination of suitable item response theory (IRT) models for item…
Descriptors: Computer Assisted Testing, Adaptive Testing, English (Second Language), Second Language Learning
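The He and Min abstract above describes a computer adaptive test built on item response theory. As an illustrative sketch only (the item bank and parameter values below are invented, and real CAT systems such as the one described add exposure control, content balancing, and testlet models), the snippet shows the core selection step: under a 2PL IRT model, pick the unused item with maximal Fisher information at the current ability estimate theta.

```python
import math

def p_correct(theta, a, b):
    """2PL probability of a correct response (discrimination a, difficulty b)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * p * (1 - p)."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, bank, used):
    """Maximum-information item selection over the unused items."""
    return max((i for i in bank if i not in used),
               key=lambda i: information(theta, bank[i]["a"], bank[i]["b"]))

# Hypothetical three-item bank for illustration.
bank = {
    "easy":   {"a": 1.2, "b": -1.5},
    "medium": {"a": 1.0, "b": 0.0},
    "hard":   {"a": 1.4, "b": 1.8},
}
print(next_item(0.0, bank, used=set()))   # an item near theta is most informative
```

At theta = 0 the "medium" item (difficulty 0) carries the most information, which is why adaptive tests converge on items matched to the examinee's current ability estimate.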
Kunina-Habenicht, Olga; Hautz, Wolf E.; Knigge, Michel; Spies, Claudia; Ahlers, Olaf – Advances in Health Sciences Education, 2015
Clinical reasoning is an essential competency in medical education. This study aimed at developing and validating a test to assess diagnostic accuracy, collected information, and diagnostic decision time in clinical reasoning. A norm-referenced computer-based test for the assessment of clinical reasoning (ASCLIRE) was developed, integrating the…
Descriptors: Logical Thinking, Cognitive Tests, Test Construction, Test Validity
Kleinhans, Janne; Schumann, Matthias – Interactive Technology and Smart Education, 2015
Purpose: This paper investigates the potential of computerized adaptive testing to reduce test time in competency measurement (CM). In the context of education and training, CM is a central challenge in competency management. For complex CMs, a compromise must be found between the time available and the quality of the measurements.…
Descriptors: Computer Assisted Testing, Educational Technology, Time, Measurement
Garcia Laborda, Jesus; Magal Royo, Teresa; Otero de Juan, Nuria; Gimenez Lopez, Jose L. – Online Submission, 2015
Assessing speaking is one of the most difficult tasks in computer-based language testing. Many countries all over the world face the need to implement standardized language tests in which speaking tasks are commonly included. However, a number of problems make them rather impractical, such as the costs, the personnel involved, the length of time for…
Descriptors: Test Construction, Telecommunications, Computer Mediated Communication, Computer Assisted Testing