Showing 1 to 15 of 30 results
Peer reviewed
Suzumura, Nana – Language Assessment Quarterly, 2022
The present study is part of a larger mixed methods project that investigated the speaking section of the Advanced Placement (AP) Japanese Language and Culture Exam. It examined assumptions underlying the evaluation inference through a content analysis of test taker responses. Results of the content analysis were integrated with those of a many-facet…
Descriptors: Content Analysis, Test Wiseness, Advanced Placement, Computer Assisted Testing
Peer reviewed
Scoular, Claire; Eleftheriadou, Sofia; Ramalingam, Dara; Cloney, Dan – Australian Journal of Education, 2020
Collaboration is a complex skill, comprised of multiple subskills, that is of growing interest to policy makers, educators and researchers. Several definitions and frameworks have been described in the literature to support assessment of collaboration; however, the inherent structure of the construct still needs better definition. In 2015, the…
Descriptors: Cooperative Learning, Problem Solving, Computer Assisted Testing, Comparative Analysis
Bukhari, Nurliyana – ProQuest LLC, 2017
In general, newer educational assessments are deemed to pose more demanding challenges than students are currently prepared to face. Two types of factors may contribute to test scores: (1) factors or dimensions that are of primary interest to the construct or test domain; and (2) factors or dimensions that are irrelevant to the construct, causing…
Descriptors: Item Response Theory, Models, Psychometrics, Computer Simulation
Peer reviewed
Aybek, Eren Can; Demirtasli, R. Nukhet – International Journal of Research in Education and Science, 2017
This article aims to provide a theoretical framework for computerized adaptive tests (CAT) and item response theory models for polytomous items. It also aims to introduce simulation and live CAT software to interested researchers. The computerized adaptive test algorithm, assumptions of item response theory models, nominal response…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Test Items
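The record above refers to polytomous IRT models, including the nominal response model, only in outline. As a purely illustrative sketch (not taken from the article), the following Python snippet computes category probabilities under Bock's nominal response model; the item's slope and intercept values are invented for illustration.

import numpy as np

def nominal_response_probs(theta, slopes, intercepts):
    """Bock's nominal response model: category probabilities for one item.

    theta      : latent ability (scalar)
    slopes     : category slope parameters a_k (illustrative values)
    intercepts : category intercept parameters c_k (illustrative values)
    """
    z = slopes * theta + intercepts          # linear predictor per category
    z -= z.max()                             # stabilize the softmax numerically
    expz = np.exp(z)
    return expz / expz.sum()

# Hypothetical 4-category item; parameter values are made up for illustration.
probs = nominal_response_probs(theta=0.5,
                               slopes=np.array([0.0, 0.7, 1.2, 1.8]),
                               intercepts=np.array([0.0, 0.3, 0.1, -0.5]))
print(probs, probs.sum())  # probabilities over the 4 categories, summing to 1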
Peer reviewed
Choi, Seung W.; Podrabsky, Tracy; McKinney, Natalie – Applied Psychological Measurement, 2012
Computerized adaptive testing (CAT) enables efficient and flexible measurement of latent constructs. The majority of educational and cognitive measurement constructs are based on dichotomous item response theory (IRT) models. An integral part of developing various components of a CAT system is conducting simulations using both known and empirical…
Descriptors: Computer Assisted Testing, Adaptive Testing, Computer Software, Item Response Theory
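The abstract above notes that developing a CAT system depends on simulations with dichotomous IRT models. As a minimal, hypothetical sketch of that idea (not the authors' software), the snippet below generates simulated item responses from a 3PL model for an invented item bank and a sample of simulees with known abilities.

import numpy as np

rng = np.random.default_rng(seed=1)

def prob_3pl(theta, a, b, c):
    """3PL probability of a correct response (all parameter values illustrative)."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical item bank: discrimination a, difficulty b, pseudo-guessing c.
n_items = 50
a = rng.uniform(0.8, 2.0, n_items)
b = rng.normal(0.0, 1.0, n_items)
c = rng.uniform(0.1, 0.25, n_items)

# Simulees with known ("true") abilities; responses are Bernoulli draws.
thetas = rng.normal(0.0, 1.0, size=200)
p = prob_3pl(thetas[:, None], a, b, c)   # 200 x 50 matrix of response probabilities
responses = rng.binomial(1, p)           # simulated dichotomous responses
print(responses.shape, responses.mean())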
Peer reviewed
Kahraman, Nilüfer – Eurasian Journal of Educational Research, 2014
Problem: Practitioners working with multiple-choice tests have long utilized Item Response Theory (IRT) models to evaluate the performance of test items for quality assurance. The use of similar applications for performance tests, however, is often encumbered due to the challenges encountered in working with complicated data sets in which local…
Descriptors: Item Response Theory, Licensing Examinations (Professions), Performance Based Assessment, Computer Simulation
Peer reviewed
Wyse, Adam E.; Albano, Anthony D. – Applied Measurement in Education, 2015
This article used several data sets from a large-scale state testing program to examine the feasibility of combining general and modified assessment items in computerized adaptive testing (CAT) for different groups of students. Results suggested that several of the assumptions made when employing this type of mixed-item CAT may not be met for…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items, Testing Programs
Zheng, Yi; Nozawa, Yuki; Gao, Xiaohong; Chang, Hua-Hua – ACT, Inc., 2012
Multistage adaptive tests (MSTs) have gained increasing popularity in recent years. MST is a balanced compromise between linear test forms (i.e., paper-and-pencil testing and computer-based testing) and traditional item-level computer-adaptive testing (CAT). It combines the advantages of both. On one hand, MST is adaptive (and therefore more…
Descriptors: Adaptive Testing, Heuristics, Accuracy, Item Banks
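The abstract above describes multistage adaptive testing as a compromise between linear forms and item-level CAT: examinees are routed between preassembled modules rather than between single items. The sketch below shows one way a number-correct routing rule for a hypothetical two-stage MST might look; the module names and cut scores are invented for illustration, not taken from the report.

def route_to_stage2(stage1_score, cut_low=4, cut_high=8):
    """Number-correct routing rule for a hypothetical two-stage MST.

    The cut scores are invented; operational MSTs derive them from the item
    bank and the target test information functions.
    """
    if stage1_score <= cut_low:
        return "easy module"
    elif stage1_score <= cut_high:
        return "medium module"
    else:
        return "hard module"

for score in (2, 6, 10):
    print(score, "->", route_to_stage2(score))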
Peer reviewed
Tracey, Terence J. G. – Journal of Career Assessment, 2010
The benefits of computer-assisted assessment via the Internet are well known in interest assessment as it relates to information access. Individuals can use their assessment scores to easily access a wealth of career and major information. However, computer-assisted assessment also enables a unique assessment experience for each individual…
Descriptors: Program Effectiveness, Computer Assisted Testing, Internet, Evaluation
Peer reviewed
Rudner, Lawrence M.; Guo, Fanmin – Journal of Applied Testing Technology, 2011
This study investigates measurement decision theory (MDT) as an underlying model for computer adaptive testing when the goal is to classify examinees into one of a finite number of groups. The first analysis compares MDT with a popular item response theory model and finds little difference in terms of the percentage of correct classifications. The…
Descriptors: Adaptive Testing, Instructional Systems, Item Response Theory, Computer Assisted Testing
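The abstract above describes measurement decision theory as a model for classifying examinees into a finite number of groups. A minimal sketch of that kind of classification is given below, assuming group-conditional item probabilities and priors that are invented for illustration; it simply applies Bayes' theorem to a vector of dichotomous responses.

import numpy as np

def classify_mdt(responses, p_correct_by_group, priors):
    """Decision-theoretic classification sketch (all numbers illustrative).

    responses          : 0/1 vector of item responses
    p_correct_by_group : matrix [group, item] of P(correct | group)
    priors             : prior probability of each group
    Returns the posterior probability of each group given the responses.
    """
    x = np.asarray(responses)
    p = np.asarray(p_correct_by_group)
    likelihood = np.prod(p**x * (1 - p)**(1 - x), axis=1)   # P(responses | group)
    posterior = likelihood * priors
    return posterior / posterior.sum()

# Two hypothetical groups (e.g., non-master and master) and five items.
p_by_group = np.array([[0.3, 0.4, 0.5, 0.2, 0.4],
                       [0.8, 0.9, 0.7, 0.6, 0.85]])
print(classify_mdt([1, 1, 0, 1, 1], p_by_group, priors=np.array([0.5, 0.5])))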
Peer reviewed
Quellmalz, Edys S.; Davenport, Jodi L.; Timms, Michael J.; DeBoer, George E.; Jordan, Kevin A.; Huang, Chun-Wei; Buckley, Barbara C. – Journal of Educational Psychology, 2013
How can assessments measure complex science learning? Although traditional, multiple-choice items can effectively measure declarative knowledge such as scientific facts or definitions, they are considered less well suited for providing evidence of science inquiry practices such as making observations or designing and conducting investigations.…
Descriptors: Science Education, Educational Assessment, Psychometrics, Science Tests
Peer reviewed
Mislevy, Robert J.; Behrens, John T.; Dicerbo, Kristen E.; Levy, Roy – Journal of Educational Data Mining, 2012
"Evidence-centered design" (ECD) is a comprehensive framework for describing the conceptual, computational and inferential elements of educational assessment. It emphasizes the importance of articulating inferences one wants to make and the evidence needed to support those inferences. At first blush, ECD and "educational data…
Descriptors: Educational Assessment, Psychometrics, Evidence, Computer Games
Peer reviewed
Bulut, Okan; Kan, Adnan – Eurasian Journal of Educational Research, 2012
Problem Statement: Computerized adaptive testing (CAT) is a sophisticated and efficient way of delivering examinations. In CAT, items for each examinee are selected from an item bank based on the examinee's responses to the items. In this way, the difficulty level of the test is adjusted based on the examinee's ability level. Instead of…
Descriptors: Adaptive Testing, Computer Assisted Testing, College Entrance Examinations, Graduate Students
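The abstract above explains the core CAT mechanism: items are drawn from a bank according to the examinee's previous responses, so the difficulty of the test tracks the provisional ability estimate. The sketch below illustrates one common way to realize this (maximum-information item selection under a 2PL model with EAP ability updates on a quadrature grid); the item bank, test length, and simulee ability are all invented, and this is not necessarily the selection rule used in the study.

import numpy as np

rng = np.random.default_rng(seed=7)

def p2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical item bank (values invented for illustration).
a = rng.uniform(0.8, 2.0, 100)
b = rng.normal(0.0, 1.0, 100)

true_theta = 0.8                      # simulee's "true" ability
grid = np.linspace(-4, 4, 81)         # quadrature grid for EAP updates
prior = np.exp(-0.5 * grid**2)        # standard-normal prior (unnormalized)

administered, responses = [], []
theta_hat = 0.0
for _ in range(20):                   # fixed test length of 20 items
    info = a**2 * p2pl(theta_hat, a, b) * (1 - p2pl(theta_hat, a, b))
    info[administered] = -np.inf      # never reuse an administered item
    item = int(np.argmax(info))       # maximum-information selection
    x = rng.binomial(1, p2pl(true_theta, a[item], b[item]))
    administered.append(item); responses.append(x)

    # EAP ability update on the grid, given all responses so far.
    like = np.ones_like(grid)
    for i, r in zip(administered, responses):
        p = p2pl(grid, a[i], b[i])
        like *= p**r * (1 - p)**(1 - r)
    post = like * prior
    theta_hat = float(np.sum(grid * post) / np.sum(post))

print("final ability estimate:", round(theta_hat, 2))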
Peer reviewed
Boyer, Kristy Elizabeth, Ed.; Yudelson, Michael, Ed. – International Educational Data Mining Society, 2018
The 11th International Conference on Educational Data Mining (EDM 2018) is held under the auspices of the International Educational Data Mining Society at the Templeton Landing in Buffalo, New York. This year's EDM conference was highly competitive, with 145 long and short paper submissions. Of these, 23 were accepted as full papers and 37…
Descriptors: Data Collection, Data Analysis, Computer Science Education, Program Proposals
Peer reviewed
Folk, Valerie Greaud; Green, Bert F. – Applied Psychological Measurement, 1989
Some effects of using unidimensional item response theory (IRT) were examined when the assumption of unidimensionality was violated. Adaptive and nonadaptive tests were used. It appears that use of a unidimensional model can bias parameter estimation, adaptive item selection, and ability estimation for the two types of testing. (TJH)
Descriptors: Ability Identification, Adaptive Testing, Computer Assisted Testing, Computer Simulation
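The abstract above reports that fitting a unidimensional IRT model when the data are multidimensional can bias parameter estimation, adaptive item selection, and ability estimation. As a rough, hypothetical illustration of why, the snippet below generates responses from a two-dimensional compensatory 2PL model and shows that a single unidimensional summary (here simply the total score, standing in for a unidimensional ability estimate) tracks a blend of the two true traits rather than either one; all parameter values are invented.

import numpy as np

rng = np.random.default_rng(seed=3)

# Two latent dimensions; a unidimensional model would collapse them into one.
n_people, n_items = 1000, 40
theta = rng.normal(size=(n_people, 2))

# Compensatory two-dimensional 2PL with invented loadings: half the items load
# mostly on dimension 1, half mostly on dimension 2.
a = np.vstack([np.column_stack([rng.uniform(1.0, 1.5, 20), rng.uniform(0.0, 0.3, 20)]),
               np.column_stack([rng.uniform(0.0, 0.3, 20), rng.uniform(1.0, 1.5, 20)])])
b = rng.normal(size=n_items)
p = 1.0 / (1.0 + np.exp(-(theta @ a.T - b)))
x = rng.binomial(1, p)

# The unidimensional summary correlates moderately with each true trait instead
# of strongly with one, illustrating the distortion the study examines.
total = x.sum(axis=1)
print("corr with theta1:", round(np.corrcoef(total, theta[:, 0])[0, 1], 2))
print("corr with theta2:", round(np.corrcoef(total, theta[:, 1])[0, 1], 2))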