Publication Date
In 2025: 0
Since 2024: 1
Since 2021 (last 5 years): 2
Since 2016 (last 10 years): 8
Since 2006 (last 20 years): 17
Descriptor
Computer Assisted Testing: 19
Factor Analysis: 19
Item Response Theory: 19
Test Items: 8
Comparative Analysis: 7
Adaptive Testing: 6
Models: 6
Scores: 6
Correlation: 5
College Entrance Examinations: 4
Evaluation Methods: 4
Author
Seo, Dong Gi: 2
Atwood, Kristin: 1
Bejar, Isaac: 1
Berberoglu, Giray: 1
Bishop, Kyoungwon: 1
Buckley, Barbara C.: 1
Cho, YoungWoo: 1
Coster, Wendy J.: 1
Davenport, Jodi L.: 1
De Champlain, Andre: 1
DeBoer, George E.: 1
Publication Type
Journal Articles: 13
Reports - Research: 13
Numerical/Quantitative Data: 2
Reports - Evaluative: 2
Books: 1
Collected Works - Proceedings: 1
Dissertations/Theses -…: 1
Reports - Descriptive: 1
Speeches/Meeting Papers: 1
Audience
Researchers: 2
Practitioners: 1
Students: 1
Assessments and Surveys
ACT Assessment: 2
Graduate Record Examinations: 1
Law School Admission Test: 1
Pediatric Evaluation of…: 1
Program for International…: 1
Markus T. Jansen; Ralf Schulze – Educational and Psychological Measurement, 2024
Thurstonian forced-choice modeling is considered to be a powerful new tool to estimate item and person parameters while simultaneously testing the model fit. This assessment approach is associated with the aim of reducing faking and other response tendencies that plague traditional self-report trait assessments. As a result of major recent…
Descriptors: Factor Analysis, Models, Item Analysis, Evaluation Methods
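The core idea behind the Thurstonian approach in the entry above can be made concrete with a few lines of code. The sketch below illustrates only the underlying choice model, not the authors' estimation procedure: each statement is assumed to have a normally distributed latent utility, and the respondent endorses whichever statement's utility is higher on that trial. The statement labels, means, and standard deviations are all hypothetical.

import numpy as np
from math import erf, sqrt

# Standard normal CDF via the error function.
phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))

mu = {"A": 0.8, "B": 0.2}   # hypothetical latent means of two statements
sd = {"A": 1.0, "B": 1.0}   # hypothetical standard deviations of the utilities

# Model-implied probability that statement A is preferred to B
# (independent normal utilities, so their difference is also normal).
p_a_over_b = phi((mu["A"] - mu["B"]) / sqrt(sd["A"] ** 2 + sd["B"] ** 2))
print("P(choose A over B):", round(p_a_over_b, 3))

# Simulated forced-choice trials reproduce that probability.
rng = np.random.default_rng(2)
t_a = rng.normal(mu["A"], sd["A"], size=100_000)
t_b = rng.normal(mu["B"], sd["B"], size=100_000)
print("simulated choice rate:", round(float(np.mean(t_a > t_b)), 3))

A full Thurstonian forced-choice model additionally links the latent means to person and item parameters and accounts for dependencies within blocks of statements, which is where the estimation and model-fit issues discussed in the article arise.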
Min, Shangchao; Bishop, Kyoungwon; Gary Cook, Howard – Language Testing, 2022
This study explored the interplay between content knowledge and reading ability in a large-scale multistage adaptive English for academic purposes (EAP) reading assessment at a range of ability levels across grades 1-12. The datasets for this study were item-level responses to the reading tests of ACCESS for ELLs Online 2.0. A sample of 10,000…
Descriptors: Item Response Theory, English Language Learners, Correlation, Reading Ability
Steedle, Jeffrey; Pashley, Peter; Cho, YoungWoo – ACT, Inc., 2020
Three mode comparability studies were conducted on the following Saturday national ACT test dates: October 26, 2019, December 14, 2019, and February 8, 2020. The primary goal of these studies was to evaluate whether ACT scores exhibited mode effects between paper and online testing that would necessitate statistical adjustments to the online…
Descriptors: Test Format, Computer Assisted Testing, College Entrance Examinations, Scores
Robin, Frédéric; Bejar, Isaac; Liang, Longjuan; Rijmen, Frank – ETS Research Report Series, 2016
Exploratory and confirmatory factor analyses of domestic data from the "GRE"® revised General Test, introduced in 2011, were conducted separately for the verbal (VBL) and quantitative (QNT) reasoning measures to evaluate the unidimensionality and local independence assumptions required by item response theory (IRT). Results based on data…
Descriptors: College Entrance Examinations, Graduate Study, Verbal Tests, Mathematics Tests
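One routine way to probe the unidimensionality assumption mentioned in the entry above is to inspect the eigenvalues of the inter-item correlation matrix. The sketch below is a generic illustration of that check on simulated dichotomous data, not a reproduction of the ETS analyses; the sample size, the item parameters, and the use of ordinary Pearson correlations (rather than tetrachorics) are all simplifying assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 2000, 20

# Simulate roughly unidimensional dichotomous responses from a 2PL-like model.
theta = rng.normal(size=n_persons)              # single latent ability
a = rng.uniform(0.8, 1.6, size=n_items)         # discriminations (assumed)
b = rng.normal(scale=0.8, size=n_items)         # difficulties (assumed)
p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
x = (rng.random((n_persons, n_items)) < p).astype(float)

# Eigenvalues of the inter-item correlation matrix, largest first.
eigvals = np.linalg.eigvalsh(np.corrcoef(x, rowvar=False))[::-1]
print("first three eigenvalues:", np.round(eigvals[:3], 2))
print("ratio of 1st to 2nd:", round(eigvals[0] / eigvals[1], 2))

# A dominant first eigenvalue is consistent with essential unidimensionality;
# a sizable second eigenvalue would point toward additional dimensions and a
# possible violation of local independence.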
Kalender, Ilker; Berberoglu, Giray – Educational Sciences: Theory and Practice, 2017
Admission to university in Turkey is highly competitive and presents a number of practical problems concerning not only the test administration process itself but also the psychometric properties of test scores. Computerized adaptive testing (CAT) is seen as a possible alternative approach to solving these problems. In the first phase of…
Descriptors: Foreign Countries, Computer Assisted Testing, College Admission, Simulation
Wei, Wei; Zheng, Ying – Computer Assisted Language Learning, 2017
This research provided a comprehensive evaluation and validation of the listening section of a newly introduced computerised test, Pearson Test of English Academic (PTE Academic). PTE Academic contains 11 item types assessing academic listening skills either alone or in combination with other skills. First, task analysis helped identify skills…
Descriptors: Listening Comprehension Tests, Computer Assisted Testing, Language Tests, Construct Validity
Li, Dongmei; Yi, Qing; Harris, Deborah – ACT, Inc., 2017
In preparation for online administration of the ACT® test, ACT conducted studies to examine the comparability of scores between online and paper administrations, including a timing study in fall 2013, a mode comparability study in spring 2014, and a second mode comparability study in spring 2015. This report presents major findings from these…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Comparative Analysis, Test Format
Coster, Wendy J.; Kramer, Jessica M.; Tian, Feng; Dooley, Meghan; Liljenquist, Kendra; Kao, Ying-Chia; Ni, Pengsheng – Autism: The International Journal of Research and Practice, 2016
The Pediatric Evaluation of Disability Inventory-Computer Adaptive Test is an alternative method for describing the adaptive function of children and youth with disabilities using a computer-administered assessment. This study evaluated the performance of the Pediatric Evaluation of Disability Inventory-Computer Adaptive Test with a national…
Descriptors: Autism, Pervasive Developmental Disorders, Computer Assisted Testing, Adaptive Testing
Han, Kyung T.; Guo, Fanmin – Practical Assessment, Research & Evaluation, 2014
The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…
Descriptors: Maximum Likelihood Statistics, Structural Equation Models, Data, Computer Assisted Testing
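The connection between CAT data and missing-data theory sketched in the abstract above can be illustrated directly. In a CAT, items an examinee never saw are simply missing, so the response matrix is mostly empty, and whether an item is missing depends on the responses already observed through the adaptive selection, which is the missing-at-random structure that FIML relies on. The toy example below, with assumed 2PL item parameters and a deliberately crude selection and scoring rule, only shows that estimation can proceed from the observed entries alone; it is not Han and Guo's procedure.

import numpy as np

rng = np.random.default_rng(1)
a = np.full(30, 1.2)                   # 2PL discriminations (assumed known)
b = np.linspace(-2.0, 2.0, 30)         # 2PL difficulties (assumed known)

def administer_cat(theta_true, n_admin=8):
    """Give 8 of the 30 items adaptively; unseen items stay missing (NaN)."""
    resp = np.full(30, np.nan)
    theta_hat = 0.0
    for _ in range(n_admin):
        unused = np.where(np.isnan(resp))[0]
        # Toy selection rule: pick the unused item whose difficulty is
        # closest to the current ability estimate.
        pick = unused[np.argmin(np.abs(b[unused] - theta_hat))]
        p = 1.0 / (1.0 + np.exp(-a[pick] * (theta_true - b[pick])))
        resp[pick] = float(rng.random() < p)
        # Crude interim update so the next item depends on what was observed.
        theta_hat += 0.5 if resp[pick] == 1.0 else -0.5
    return resp

def theta_mle(resp, grid=np.linspace(-4, 4, 161)):
    """Maximum-likelihood ability estimate using only the observed responses."""
    seen = ~np.isnan(resp)
    p = 1.0 / (1.0 + np.exp(-a[seen] * (grid[:, None] - b[seen])))
    loglik = (resp[seen] * np.log(p) + (1 - resp[seen]) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(loglik)]

resp = administer_cat(theta_true=0.7)
print("administered items:", np.where(~np.isnan(resp))[0])
print("ability estimate from observed items only:", theta_mle(resp))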
Seo, Dong Gi; Weiss, David J. – Educational and Psychological Measurement, 2015
Most computerized adaptive tests (CATs) have been studied using the framework of unidimensional item response theory. However, many psychological variables are multidimensional and might benefit from using a multidimensional approach to CATs. This study investigated the accuracy, fidelity, and efficiency of a fully multidimensional CAT algorithm…
Descriptors: Computer Assisted Testing, Adaptive Testing, Accuracy, Fidelity
Wan, Lei; Henly, George A. – Applied Measurement in Education, 2012
Many innovative item formats have been proposed over the past decade, but little empirical research has been conducted on their measurement properties. This study examines the reliability, efficiency, and construct validity of two innovative item formats--the figural response (FR) and constructed response (CR) formats used in a K-12 computerized…
Descriptors: Test Items, Test Format, Computer Assisted Testing, Measurement
Quellmalz, Edys S.; Davenport, Jodi L.; Timms, Michael J.; DeBoer, George E.; Jordan, Kevin A.; Huang, Chun-Wei; Buckley, Barbara C. – Journal of Educational Psychology, 2013
How can assessments measure complex science learning? Although traditional multiple-choice items can effectively measure declarative knowledge such as scientific facts or definitions, they are considered less well suited for providing evidence of science inquiry practices such as making observations or designing and conducting investigations.…
Descriptors: Science Education, Educational Assessment, Psychometrics, Science Tests
Seo, Dong Gi – ProQuest LLC, 2011
Most computerized adaptive tests (CAT) have been studied under the framework of unidimensional item response theory. However, many psychological variables are multidimensional and might benefit from using a multidimensional approach to CAT. In addition, a number of psychological variables (e.g., quality of life, depression) can be conceptualized…
Descriptors: Test Length, Quality of Life, Item Analysis, Geometric Concepts
Wainer, Howard – Journal of Educational and Behavioral Statistics, 2010
In this essay, the author tries to look forward into the 21st century to divine three things: (i) What skills will researchers in the future need to solve the most pressing problems? (ii) What are some of the most likely candidates to be those problems? and (iii) What are some current areas of research that seem mined out and should not distract…
Descriptors: Research Skills, Researchers, Internet, Access to Information
Nering, Michael L., Ed.; Ostini, Remo, Ed. – Routledge, Taylor & Francis Group, 2010
This comprehensive "Handbook" focuses on the most widely used polytomous item response theory (IRT) models. These models help us understand the interaction between examinees and test questions in which the questions have multiple response categories. The book reviews all of the major models and includes discussions about how and where the models…
Descriptors: Guides, Item Response Theory, Test Items, Correlation
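As a concrete illustration of the kind of model the handbook covers, the sketch below computes category probabilities for a single item under Samejima's graded response model, one widely used polytomous IRT model: the probability of each score category is the difference between two adjacent cumulative 2PL curves. The discrimination and threshold values are hypothetical.

import numpy as np

def grm_category_probs(theta, a, thresholds):
    """Category probabilities for one graded-response item.

    theta:      latent trait value(s)
    a:          item discrimination
    thresholds: increasing category boundaries b_1 < ... < b_{K-1}
    """
    theta = np.atleast_1d(theta).astype(float)
    # Cumulative probabilities P(X >= k), padded with 1 (k = 0) and 0 (k = K).
    cum = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - np.asarray(thresholds, float))))
    cum = np.column_stack([np.ones_like(theta), cum, np.zeros_like(theta)])
    # Adjacent differences give P(X = k) for k = 0, ..., K-1.
    return cum[:, :-1] - cum[:, 1:]

# Example: a four-category item evaluated at three trait levels
# (all parameter values are hypothetical).
probs = grm_category_probs(theta=[-1.0, 0.0, 1.5], a=1.3, thresholds=[-1.2, 0.1, 1.4])
print(np.round(probs, 3))   # each row sums to 1 across the four categories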