Showing 121 to 135 of 1,337 results
Peer reviewed
van der Linden, Wim J.; Ren, Hao – Journal of Educational and Behavioral Statistics, 2020
The Bayesian way of accounting for the effects of error in the ability and item parameters in adaptive testing is through the joint posterior distribution of all parameters. An optimized Markov chain Monte Carlo algorithm for adaptive testing is presented, which samples this distribution in real time to score the examinee's ability and optimally…
Descriptors: Bayesian Statistics, Adaptive Testing, Error of Measurement, Markov Processes
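The entry above describes sampling the joint posterior in real time to score ability. As a minimal, hedged sketch of the general idea only (not the authors' optimized algorithm), the snippet below draws from the posterior of θ under a 2PL model with item parameters treated as fixed; the item values, prior, and tuning constants are illustrative assumptions.

```python
# Minimal sketch (not the authors' optimized algorithm): random-walk Metropolis
# sampling of the ability posterior under a 2PL model with fixed item parameters.
# Item parameters, prior, and tuning constants are illustrative assumptions.
import numpy as np

def log_posterior(theta, a, b, u):
    """Log posterior of ability theta given 2PL items (a, b) and responses u,
    with a standard-normal prior on theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    log_lik = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))
    log_prior = -0.5 * theta**2
    return log_lik + log_prior

def sample_theta(a, b, u, n_draws=2000, step=0.5, seed=0):
    """Random-walk Metropolis draws from the posterior of theta."""
    rng = np.random.default_rng(seed)
    theta = 0.0
    draws = []
    for _ in range(n_draws):
        prop = theta + step * rng.standard_normal()
        if np.log(rng.uniform()) < log_posterior(prop, a, b, u) - log_posterior(theta, a, b, u):
            theta = prop
        draws.append(theta)
    return np.array(draws)

# Example: three administered items and their scored responses (illustrative values).
a = np.array([1.2, 0.8, 1.5])   # discriminations
b = np.array([-0.5, 0.3, 1.0])  # difficulties
u = np.array([1, 1, 0])         # item scores
draws = sample_theta(a, b, u)
print("posterior mean of theta:", draws[500:].mean())  # discard burn-in draws
```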
Peer reviewed
Hula, William D.; Fergadiotis, Gerasimos; Swiderski, Alexander M.; Silkes, JoAnn P.; Kellough, Stacey – Journal of Speech, Language, and Hearing Research, 2020
Purpose: The purpose of this study was to verify the equivalence of 2 alternate test forms with nonoverlapping content generated by an item response theory (IRT)-based computer-adaptive test (CAT). The Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996) was utilized as an item bank in a prospective, independent…
Descriptors: Adaptive Testing, Computer Assisted Testing, Severity (of Disability), Aphasia
Peer reviewed
Laamanen, Merja; Ladonlahti, Tarja; Uotinen, Sanna; Okada, Alexandra; Bañeres, David; Koçdar, Serpil – International Journal of Educational Technology in Higher Education, 2021
Trust-based e-assessment systems are increasingly important in the digital age for both academic institutions and students, including students with special educational needs and disabilities (SEND). Recent literature indicates a growing number of studies about e-authentication and authorship verification for quality assurance with more flexible…
Descriptors: Higher Education, College Students, Student Attitudes, Special Needs Students
Olney, Andrew M.; Gilbert, Stephen B.; Rivers, Kelly – Grantee Submission, 2021
Cyberlearning technologies increasingly seek to offer personalized learning experiences via adaptive systems that customize pedagogy, content, feedback, pace, and tone according to the just-in-time needs of a learner. However, it is historically difficult to: (1) create these smart learning environments; (2) continuously improve them based on…
Descriptors: Educational Technology, Computer Assisted Instruction, Learning Analytics, Intelligent Tutoring Systems
Stefan Lorenz – ProQuest LLC, 2024
This dissertation develops and applies sophisticated Item Response Theory (IRT) methods to address fundamental measurement challenges in cognitive testing, focusing on the Armed Services Vocational Aptitude Battery (ASVAB) data from the National Longitudinal Survey of Youth (NLSY). The first chapter implements a confirmatory multidimensional IRT…
Descriptors: Human Capital, Item Response Theory, Vocational Aptitude, Armed Forces
Peer reviewed
Angelone, Anna Maria; Galassi, Alessandra; Vittorini, Pierpaolo – International Journal of Learning Technology, 2022
The adoption of computerised adaptive testing (CAT) instead of classical fixed-item testing (FIT) raises questions from both teachers' and students' perspectives. The scientific literature indicates that teachers using CAT instead of FIT should experience shorter assessment completion times and obtain more precise evaluations. As for the students, adaptive…
Descriptors: Adaptive Testing, Computer Assisted Testing, College Freshmen, Student Attitudes
Peer reviewed
Ethan R. Van Norman; Emily R. Forcht – Journal of Education for Students Placed at Risk, 2024
This study evaluated the forecasting accuracy of trend estimation methods applied to time-series data from computer adaptive tests (CATs). Data were collected roughly once a month over the course of a school year. We evaluated the forecasting accuracy of two regression-based growth estimation methods (ordinary least squares and Theil-Sen). The…
Descriptors: Data Collection, Predictive Measurement, Predictive Validity, Predictor Variables
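The abstract names two regression-based trend estimators, ordinary least squares and Theil-Sen. As a hedged illustration (the score series and forecast horizon are invented, not from the study), the sketch below fits both slopes to a short monthly series and extrapolates one month ahead.

```python
# Illustrative comparison of OLS and Theil-Sen trend lines on a short monthly
# score series; the data values are invented, not from the study.
import numpy as np
from itertools import combinations

months = np.arange(8)                       # roughly one observation per month
scores = np.array([195, 198, 202, 201, 207, 205, 211, 214], dtype=float)

# Ordinary least squares slope and intercept.
ols_slope, ols_intercept = np.polyfit(months, scores, 1)

# Theil-Sen: median of all pairwise slopes; intercept as median residual.
pair_slopes = [(scores[j] - scores[i]) / (months[j] - months[i])
               for i, j in combinations(range(len(months)), 2)]
ts_slope = np.median(pair_slopes)
ts_intercept = np.median(scores - ts_slope * months)

# Forecast the next month's score with each method.
next_month = months[-1] + 1
print("OLS forecast:      ", ols_slope * next_month + ols_intercept)
print("Theil-Sen forecast:", ts_slope * next_month + ts_intercept)
```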
Peer reviewed
Han, Kyung T.; Dimitrov, Dimiter M.; Al-Mashary, Faisal – Educational and Psychological Measurement, 2019
The "D"-scoring method for scoring and equating tests with binary items proposed by Dimitrov offers some of the advantages of item response theory, such as item-level difficulty information and score computation that reflects the item difficulties, while retaining the merits of classical test theory such as the simplicity of number…
Descriptors: Test Construction, Scoring, Test Items, Adaptive Testing
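The abstract highlights that the "D"-scoring method reflects item difficulties in the computed score while keeping the simplicity of classical test theory. The sketch below shows one generic difficulty-weighted score for binary items; the particular weighting is an assumption for illustration and is not claimed to reproduce Dimitrov's exact D-score formula.

```python
# Hedged sketch of difficulty-weighted scoring of binary items. The weighting
# below (credit proportional to an item's empirical difficulty) is an assumed
# illustration of the general idea, not necessarily Dimitrov's exact D-score.
import numpy as np

def difficulty_weighted_score(responses, p_correct):
    """responses: 0/1 vector for one examinee; p_correct: classical item
    p-values from a reference sample. Harder items (lower p) earn more credit."""
    difficulty = 1.0 - p_correct            # empirical item difficulty
    return np.sum(responses * difficulty) / np.sum(difficulty)

p_correct = np.array([0.9, 0.7, 0.5, 0.3])  # easy -> hard (illustrative values)
responses = np.array([1, 1, 0, 1])
print("score on a 0-1 scale:", difficulty_weighted_score(responses, p_correct))
```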
Peer reviewed
Soland, James; Kuhfeld, Megan; Rios, Joseph – Large-scale Assessments in Education, 2021
Low examinee effort is a major threat to valid uses of many test scores. Fortunately, several methods have been developed to detect noneffortful item responses, most of which use response times. To accurately identify noneffortful responses, one must set response time thresholds separating those responses from effortful ones. While other studies…
Descriptors: Reaction Time, Measurement, Response Style (Tests), Reading Tests
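The abstract concerns setting response time thresholds that separate noneffortful from effortful responses. As an assumed illustration only (a simple normative-style threshold, not necessarily the method evaluated in the article), the sketch below flags a response when its time falls below a fixed fraction of the item's mean response time.

```python
# Sketch of a normative response-time threshold: flag a response as noneffortful
# when its time is below a fixed fraction of that item's mean response time.
# The 10% fraction and the data are illustrative assumptions.
import numpy as np

def flag_noneffortful(times, fraction=0.10):
    """times: responses-by-items matrix of response times in seconds.
    Returns a boolean matrix where True marks a flagged (noneffortful) response."""
    thresholds = fraction * times.mean(axis=0)   # one threshold per item
    return times < thresholds

rt = np.array([[42.0, 55.0, 61.0],
               [ 2.0, 50.0,  2.5],   # two rapid responses that should be flagged
               [38.0, 47.0, 58.0]])
print(flag_noneffortful(rt))
```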
Peer reviewed
Jewsbury, Paul A.; van Rijn, Peter W. – Journal of Educational and Behavioral Statistics, 2020
In large-scale educational assessment data consistent with a simple-structure multidimensional item response theory (MIRT) model, where every item measures only one latent variable, separate unidimensional item response theory (UIRT) models for each latent variable are often calibrated for practical reasons. While this approach can be valid for…
Descriptors: Item Response Theory, Computation, Test Items, Adaptive Testing
Peer reviewed
Chun Wang; Ping Chen; Shengyu Jiang – Journal of Educational Measurement, 2020
Many large-scale educational surveys have moved from linear form design to multistage testing (MST) design. One advantage of MST is that it can provide more accurate latent trait (θ) estimates using fewer items than required by linear tests. However, MST generates incomplete response data by design; hence, questions remain as to how to…
Descriptors: Test Construction, Test Items, Adaptive Testing, Maximum Likelihood Statistics
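The abstract notes that MST produces incomplete response data by design, which raises scoring questions. A minimal sketch under an assumed 2PL setup (not the authors' specific estimator) is to maximize the likelihood over only the items an examinee was actually routed to:

```python
# Sketch: maximum-likelihood ability estimate using only the items actually
# administered in a multistage design (2PL model; values are illustrative).
import numpy as np
from scipy.optimize import minimize_scalar

def mle_theta(a, b, u):
    """Return the theta that maximizes the 2PL likelihood of responses u
    to the administered items with parameters (a, b)."""
    def neg_log_lik(theta):
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return -np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))
    return minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded").x

# Only the routed items appear; unadministered items are simply absent by design.
a = np.array([1.0, 1.3, 0.9, 1.1])   # discriminations of administered items
b = np.array([-0.2, 0.4, 0.9, 1.4])  # difficulties of administered items
u = np.array([1, 1, 0, 1])           # scored responses
print("theta_hat:", mle_theta(a, b, u))
```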
Goodwin, Amanda P.; Petscher, Yaacov; Tock, Jamie; McFadden, Sara; Reynolds, Dan; Lantos, Tess; Jones, Sara – Assessment for Effective Intervention, 2022
Assessment of language skills for upper elementary and middle schoolers is important due to the strong link between language and reading comprehension. Yet, currently few practical, reliable, valid, and instructionally informative assessments of language exist. This study provides validation evidence for Monster, P.I., which is a gamified,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Language Tests, Vocabulary
Peer reviewed
Cui, Zhongmin; Liu, Chunyan; He, Yong; Chen, Hanwei – Journal of Educational Measurement, 2018
Allowing item review in computerized adaptive testing (CAT) is getting more attention in the educational measurement field as more and more testing programs adopt CAT. The research literature has shown that allowing item review in an educational test could result in more accurate estimates of examinees' abilities. The practice of item review in…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Test Wiseness
Peer reviewed
Luo, Xiao; Wang, Xinrui – International Journal of Testing, 2019
This study introduced dynamic multistage testing (dy-MST) as an improvement to existing adaptive testing methods. dy-MST combines the advantages of computerized adaptive testing (CAT) and computerized adaptive multistage testing (ca-MST) to create a highly efficient and regulated adaptive testing method. In the test construction phase, multistage…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Construction, Psychometrics
Peer reviewed
Gandhi, S.; Hema, G. – Journal of Educational Technology, 2019
Computer-based tests can incorporate many interactive and engaging question types, such as simulations, online tasks, and measurement of skills, rather than simply assessing with paper-and-pencil tests. Computerized testing also offers greater standardization of test administration. The aim of this study is to seek out the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Undergraduate Students, Foreign Countries