Showing all 7 results
Su, Shiyang – ProQuest LLC, 2017
With online assessment becoming mainstream and the recording of response times becoming straightforward, the importance of response times as a measure of psychological constructs has been recognized, and the literature on modeling response times has grown over the last few decades. Previous studies have tried to formulate models and theories to…
Descriptors: Reading Comprehension, Item Response Theory, Models, Reaction Time
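For context, one widely cited formulation in the response-time modeling literature this abstract points to is van der Linden's lognormal model, sketched here in general form (illustrative of the tradition, not necessarily the model developed in this dissertation):

f(t_{ij}) = \frac{\alpha_i}{t_{ij}\sqrt{2\pi}} \exp\!\left(-\frac{1}{2}\left[\alpha_i\left(\ln t_{ij} - (\beta_i - \tau_j)\right)\right]^2\right)

Here t_{ij} is examinee j's response time on item i, \tau_j is the examinee's speed, \beta_i the item's time intensity, and \alpha_i a precision (discrimination-like) parameter; such a model is often estimated jointly with an IRT model for the item responses.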
Peer reviewed
Kahraman, Nilüfer – Eurasian Journal of Educational Research, 2014
Problem: Practitioners working with multiple-choice tests have long utilized Item Response Theory (IRT) models to evaluate the performance of test items for quality assurance. The use of similar applications for performance tests, however, is often encumbered by the challenges of working with complicated data sets in which local…
Descriptors: Item Response Theory, Licensing Examinations (Professions), Performance Based Assessment, Computer Simulation
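For readers unfamiliar with the item-evaluation use of IRT mentioned above, a minimal sketch of the two-parameter logistic (2PL) model routinely applied to multiple-choice items (an illustrative example; the article's specific model may differ):

P(X_{ij} = 1 \mid \theta_j) = \frac{1}{1 + \exp\left[-a_i(\theta_j - b_i)\right]}

where \theta_j is the examinee's latent ability, b_i the item's difficulty, and a_i its discrimination; quality assurance typically involves checking that estimated a_i and b_i values and item-fit statistics fall within acceptable ranges.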
Peer reviewed
Taherbhai, Husein; Seo, Daeryong; Bowman, Trinell – British Educational Research Journal, 2012
Literature in the United States provides many examples of no difference in student achievement across modes of test administration, i.e., paper-pencil and online versions of a test. However, most of this research centres on "regular" students who do not require differential teaching methods or different evaluation…
Descriptors: Learning Disabilities, Statistical Analysis, Teaching Methods, Test Format
Peer reviewed
Wise, Steven L.; Pastor, Dena A.; Kong, Xiaojing J. – Applied Measurement in Education, 2009
Previous research has shown that rapid-guessing behavior can degrade the validity of test scores from low-stakes proficiency tests. This study examined, using hierarchical generalized linear modeling, examinee and item characteristics for predicting rapid-guessing behavior. Several item characteristics were found significant; items with more text…
Descriptors: Guessing (Tests), Achievement Tests, Correlation, Test Items
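Rapid-guessing behavior in this line of research is typically operationalized by flagging responses whose times fall below an item-specific threshold. A minimal Python sketch under that assumption (the threshold rule, column names, and function are illustrative, not taken from the study):

import pandas as pd

def flag_rapid_guesses(responses: pd.DataFrame, threshold_fraction: float = 0.10) -> pd.DataFrame:
    """Flag a response as a rapid guess when its time falls below a fraction
    of the item's median response time (one of several threshold-setting
    rules discussed in this literature)."""
    out = responses.copy()
    # Item-level median response time, broadcast back onto each response row.
    out["item_median_rt"] = out.groupby("item_id")["response_time"].transform("median")
    # Boolean flag: response time below the item-specific threshold.
    out["rapid_guess"] = out["response_time"] < threshold_fraction * out["item_median_rt"]
    return out

The flagged responses could then serve as the outcome in a hierarchical model with examinee- and item-level predictors, which is the kind of analysis the abstract describes.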
Peer reviewed
Makransky, Guido; Glas, Cees A. W. – Journal of Applied Testing Technology, 2010
An accurately calibrated item bank is essential for a valid computerized adaptive test. However, in some settings, such as occupational testing, there is limited access to test takers for calibration. As a result of the limited access to possible test takers, collecting data to accurately calibrate an item bank in an occupational setting is…
Descriptors: Foreign Countries, Simulation, Adaptive Testing, Computer Assisted Testing
Millman, Jason – 1978
Test items, all referencing the same instructional objective, are not equally difficult. This investigation attempts to identify some of the determinants of item difficulty within the context of a first course in educational statistics. Computer generated variations of items were used to provide the data. The results were used to investigate the…
Descriptors: Computer Assisted Testing, Content Analysis, Criterion Referenced Tests, Difficulty Level
Peer reviewed
Truell, Allen D.; Zhao, Jensen J.; Alexander, Melody W. – Journal of Career and Technical Education, 2005
The purposes of this study were to determine if there was a significant difference in postsecondary business student scores and test completion time based on settable test item exposure control interface format, and to determine if there was a significant difference in student scores and test completion time based on settable test item exposure…
Descriptors: College Students, Scores, Tests, Gender Differences