Publication Date
| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 2 |
| Since 2017 (last 10 years) | 6 |
| Since 2007 (last 20 years) | 18 |
Descriptor
| Descriptor | Records |
| --- | --- |
| Computer Assisted Testing | 18 |
| Methods | 18 |
| Adaptive Testing | 10 |
| Comparative Analysis | 10 |
| Test Items | 10 |
| Selection | 7 |
| Item Banks | 6 |
| Student Evaluation | 5 |
| Accuracy | 3 |
| Computation | 3 |
| Foreign Countries | 3 |
Author
| Author | Records |
| --- | --- |
| Diao, Qi | 3 |
| Dodd, Barbara G. | 2 |
| Hauser, Carl | 2 |
| He, Wei | 2 |
| van der Linden, Wim J. | 2 |
| Abood, Harith | 1 |
| Abu Maizer, Maha | 1 |
| Acres, Kadia | 1 |
| Anakwe, Bridget | 1 |
| Bao, Yu | 1 |
| Bradshaw, Laine | 1 |
Publication Type
| Publication Type | Records |
| --- | --- |
| Reports - Research | 18 |
| Journal Articles | 17 |
| Speeches/Meeting Papers | 1 |
Education Level
| Education Level | Records |
| --- | --- |
| Higher Education | 5 |
| Postsecondary Education | 4 |
| Elementary Education | 1 |
| Junior High Schools | 1 |
| Middle Schools | 1 |
| Secondary Education | 1 |
Location
| Location | Records |
| --- | --- |
| Australia | 1 |
| Jordan | 1 |
| United Kingdom | 1 |
Finch, W. Holmes – Educational and Psychological Measurement, 2023
Psychometricians have devoted much research and attention to categorical item responses, leading to the development and widespread use of item response theory for the estimation of model parameters and identification of items that do not perform in the same way for examinees from different population subgroups (e.g., differential item functioning…
Descriptors: Test Bias, Item Response Theory, Computation, Methods
Abood, Harith; Abu Maizer, Maha – International Journal of Technology in Education, 2022
This descriptive study investigates one of the most critical issues teachers faced in evaluating their students' performance at the university level during COVID-19. It aimed to specify the exam problems faced by Jordanian universities' teaching staff and the strategies they used to counter cheating by their students in…
Descriptors: Cheating, Methods, Computer Assisted Testing, Electronic Learning
Soland, James; Kuhfeld, Megan; Rios, Joseph – Large-scale Assessments in Education, 2021
Low examinee effort is a major threat to valid uses of many test scores. Fortunately, several methods have been developed to detect noneffortful item responses, most of which use response times. To accurately identify noneffortful responses, one must set response time thresholds separating those responses from effortful ones. While other studies…
Descriptors: Reaction Time, Measurement, Response Style (Tests), Reading Tests
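The response-time thresholding described in the entry above can be sketched with a simple normative-threshold rule. This is a hypothetical illustration, not the authors' exact method; the 10% cutoff and the per-item median baseline are assumptions drawn from common practice in this literature:

```python
import statistics

def nt_thresholds(times_by_item, pct=0.10):
    """Normative-threshold rule (assumed variant): a response is
    flagged as noneffortful when its time falls below `pct` of the
    item's median response time across examinees."""
    return {item: pct * statistics.median(times)
            for item, times in times_by_item.items()}

def flag_noneffortful(response_times, thresholds):
    """Return the items one examinee answered faster than threshold."""
    return {item for item, t in response_times.items()
            if t < thresholds[item]}

# Toy data: seconds spent per item across a group of examinees.
times_by_item = {"q1": [30, 42, 28, 35], "q2": [60, 55, 70, 58]}
thresholds = nt_thresholds(times_by_item)  # q1: 3.25 s, q2: 5.9 s
print(flag_noneffortful({"q1": 2.0, "q2": 20.0}, thresholds))  # -> {'q1'}
```

Setting `pct` is exactly the threshold-choice problem the study examines: too low misses rapid guessing, too high flags genuinely fast effortful responses.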
Bao, Yu; Bradshaw, Laine – Measurement: Interdisciplinary Research and Perspectives, 2018
Diagnostic classification models (DCMs) can provide multidimensional diagnostic feedback about students' mastery levels of knowledge components or attributes. One advantage of using DCMs is the ability to accurately and reliably classify students into mastery levels with a relatively small number of items per attribute. Combining DCMs with…
Descriptors: Test Items, Selection, Adaptive Testing, Computer Assisted Testing
Sahin, Alper; Ozbasi, Durmus – Eurasian Journal of Educational Research, 2017
Purpose: This study aims to reveal effects of content balancing and item selection method on ability estimation in computerized adaptive tests by comparing Fisher's maximum information (FMI) and likelihood weighted information (LWI) methods. Research Methods: Four groups of examinees (250, 500, 750, 1000) and a bank of 500 items with 10 different…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Test Content
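Fisher's maximum-information (FMI) selection named in the entry above can be sketched for a two-parameter logistic (2PL) pool. A minimal sketch under an assumed 2PL parameterization, not the study's implementation, and it omits the content-balancing constraints the study compares:

```python
import math

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta:
    I(theta) = a^2 * P * (1 - P), with P the probability of a
    correct response."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_mfi(theta, pool, administered):
    """Pick the not-yet-administered item with maximum Fisher
    information at the current ability estimate."""
    candidates = [i for i in range(len(pool)) if i not in administered]
    return max(candidates, key=lambda i: info_2pl(theta, *pool[i]))

# Toy pool of (discrimination a, difficulty b) pairs.
pool = [(1.0, -1.0), (1.5, 0.0), (0.8, 1.0)]
print(select_mfi(0.0, pool, administered=set()))  # -> 1 (a=1.5, b=0.0)
```

Because information peaks where difficulty matches ability and grows with discrimination squared, unconstrained FMI tends to exhaust high-a items, which motivates the alternative selection rules compared in several entries on this page.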
Ippolito, Kate; Pazio, Monika – Higher Education Pedagogies, 2019
At the heart of changing institutional assessment and feedback practices is the need to transform university teachers' ways of thinking about feedback and assessment. In this article, we present a case study of a three-year Master's in Education offered to UK STEMM university teachers as an opportunity to develop critically reflective and…
Descriptors: STEM Education, Masters Programs, Graduate Study, College Faculty
He, Wei; Diao, Qi; Hauser, Carl – Educational and Psychological Measurement, 2014
This study compared four item-selection procedures developed for use with severely constrained computerized adaptive tests (CATs). Severely constrained CATs are adaptive tests that seek to meet a complex set of constraints that are often not exclusive of each other (i.e., an item may contribute to the satisfaction of several…
Descriptors: Comparative Analysis, Test Items, Selection, Computer Assisted Testing
Cheng, Ying; Patton, Jeffrey M.; Shao, Can – Educational and Psychological Measurement, 2015
a-Stratified computerized adaptive testing with b-blocking (AST), as an alternative to the widely used maximum Fisher information (MFI) item selection method, can effectively balance item pool usage while providing accurate latent trait estimates in computerized adaptive testing (CAT). However, previous comparisons of these methods have treated…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Banks
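The pool-partitioning step of a-stratified CAT with b-blocking (AST), referenced in the entry above, can be sketched as follows. This is a hedged illustration of the general stratification scheme, not the authors' code; tie-breaking and leftover-item handling are simplified assumptions:

```python
def ast_strata(pool, n_strata):
    """a-stratified design with b-blocking (sketch): sort items by
    difficulty b, cut the ranking into blocks of n_strata items, and
    within each block assign items to strata in ascending order of
    discrimination a. Every stratum then spans the full difficulty
    range, but early strata hold the low-a items, reserving highly
    discriminating items for later in the test.
    Items in a final incomplete block are dropped in this sketch."""
    by_b = sorted(range(len(pool)), key=lambda i: pool[i][1])
    strata = [[] for _ in range(n_strata)]
    for start in range(0, len(by_b) - n_strata + 1, n_strata):
        block = sorted(by_b[start:start + n_strata],
                       key=lambda i: pool[i][0])
        for k, item in enumerate(block):
            strata[k].append(item)
    return strata

# Toy pool of (a, b) pairs: indices 2 and 3 are the easy items.
pool = [(0.5, 0.0), (1.5, 0.1), (1.0, -1.0), (2.0, -0.9)]
print(ast_strata(pool, n_strata=2))  # -> [[2, 0], [3, 1]]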
Yao, Lihua – Journal of Educational Measurement, 2014
The intent of this research was to find an item selection procedure in the multidimensional computer adaptive testing (CAT) framework that yielded higher precision for both the domain and composite abilities, had a higher usage of the item pool, and controlled the exposure rate. Five multidimensional CAT item selection procedures (minimum angle;…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
Diao, Qi; van der Linden, Wim J. – Applied Psychological Measurement, 2013
Automated test assembly uses the methodology of mixed integer programming to select an optimal set of items from an item bank. Automated test-form generation uses the same methodology to optimally order the items and format the test form. From an optimization point of view, production of fully formatted test forms directly from the item pool using…
Descriptors: Automation, Test Construction, Test Format, Item Banks
He, Wei; Diao, Qi; Hauser, Carl – Online Submission, 2013
This study compares four existing procedures for handling item selection in severely constrained computerized adaptive tests (CATs): the weighted deviation model (WDM), the weighted penalty model (WPM), the maximum priority index (MPI), and the shadow test approach (STA). Severely constrained CATs are adaptive tests seeking…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Banks
Sheridan, Lynnaire; Kotevski, Suzanne; Dean, Bonnie Amelia – Asia-Pacific Journal of Cooperative Education, 2014
Reflective practice is an important lifelong skill for business professionals. In the work integrated learning (WIL) curriculum, supporting interns' development of reflective practice is critical to their experience in WIL as well as their transition into professional practice. The purpose of this paper is to explore students' perceptions on the…
Descriptors: Student Attitudes, Evaluation, Methods, Student Evaluation
Ho, Tsung-Han; Dodd, Barbara G. – Applied Measurement in Education, 2012
In this study we compared five item selection procedures using three ability estimation methods in the context of a mixed-format adaptive test based on the generalized partial credit model. The item selection procedures used were maximum posterior weighted information, maximum expected information, maximum posterior weighted Kullback-Leibler…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
Moyer, Eric L.; Galindo, Jennifer L.; Dodd, Barbara G. – Educational and Psychological Measurement, 2012
Managing test specifications--both multiple nonstatistical constraints and flexibly defined constraints--has become an important part of designing item selection procedures for computerized adaptive tests (CATs) in achievement testing. This study compared the effectiveness of three procedures: constrained CAT, flexible modified constrained CAT,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items, Item Analysis
Newton, Caroline; Acres, Kadia; Bruce, Carolyn – American Journal of Speech-Language Pathology, 2013
Purpose: This study investigated whether computers are a useful tool in the assessment of people with aphasia (PWA). Computerized and traditionally administered versions of tasks were compared to determine whether (a) the scores were equivalent, (b) the administration was comparable, (c) variables such as age affected performance, and (d) the…
Descriptors: Language Tests, Computer Assisted Testing, Questionnaires, Aphasia
