Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 11 |
Since 2006 (last 20 years) | 34 |
Descriptor
Computer Assisted Testing | 40 |
Item Response Theory | 40 |
Statistical Analysis | 40 |
Test Items | 17 |
Adaptive Testing | 14 |
Comparative Analysis | 12 |
Simulation | 11 |
Foreign Countries | 10 |
Scores | 10 |
Computation | 7 |
Test Construction | 7 |
Author
Choi, Seung W. | 3 |
Sinharay, Sandip | 3 |
Biancarosa, Gina | 2 |
Carlson, Sarah E. | 2 |
Davison, Mark L. | 2 |
Kim, Dong-In | 2 |
Liu, Bowen | 2 |
Seipel, Ben | 2 |
Wan, Ping | 2 |
Abayeva, Nella F. | 1 |
Ainley, John | 1 |
Education Level
Elementary Education | 10 |
Higher Education | 9 |
Postsecondary Education | 6 |
Grade 6 | 4 |
Middle Schools | 4 |
Elementary Secondary Education | 3 |
Secondary Education | 3 |
Grade 1 | 2 |
Grade 5 | 2 |
Grade 8 | 2 |
Intermediate Grades | 2 |
Location
Australia | 2 |
California | 1 |
Finland | 1 |
France | 1 |
Georgia | 1 |
Hungary | 1 |
Indonesia | 1 |
Netherlands | 1 |
Russia | 1 |
Taiwan | 1 |
Turkey | 1 |
Assessments and Surveys
Indiana Statewide Testing for… | 2 |
ACT Assessment | 1 |
Defining Issues Test | 1 |
Graduate Record Examinations | 1 |
Program for International… | 1 |
Raven Progressive Matrices | 1 |
United States Medical… | 1 |
Sinharay, Sandip – Journal of Educational and Behavioral Statistics, 2016
Meijer and van Krimpen-Stoop noted that the number of person-fit statistics (PFSs) that have been designed for computerized adaptive tests (CATs) is relatively modest. This article partially addresses that concern by suggesting three new PFSs for CATs. The statistics are based on tests for a change point and can be used to detect an abrupt change…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Goodness of Fit
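The change-point idea behind these person-fit statistics can be illustrated with a toy sketch (this is a generic illustration under a Rasch model with known item difficulties, not the paper's actual statistics): compare a single-ability fit against the best two-segment fit over all candidate change points, so an abrupt ability shift mid-test produces a large value.

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def mle_theta(responses, bs, lo=-4.0, hi=4.0, steps=200):
    """Crude grid-search MLE of ability theta for a response vector."""
    best_t, best_ll = lo, float("-inf")
    for i in range(steps + 1):
        t = lo + (hi - lo) * i / steps
        ll = sum(
            math.log(rasch_p(t, b)) if x else math.log(1.0 - rasch_p(t, b))
            for x, b in zip(responses, bs)
        )
        if ll > best_ll:
            best_t, best_ll = t, ll
    return best_t, best_ll

def change_point_statistic(responses, bs):
    """Likelihood-ratio-style scan for an abrupt ability shift.

    Compares a single-theta fit against the best two-segment fit over
    all candidate change points; a large value flags aberrant behaviour.
    """
    _, ll_null = mle_theta(responses, bs)
    best = 0.0
    for k in range(2, len(responses) - 1):
        _, ll_a = mle_theta(responses[:k], bs[:k])
        _, ll_b = mle_theta(responses[k:], bs[k:])
        best = max(best, 2.0 * (ll_a + ll_b - ll_null))
    return best
```

An examinee who answers the first half of a test well and the second half poorly yields a much larger statistic than one whose successes are spread evenly.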
Davison, Mark L.; Biancarosa, Gina; Carlson, Sarah E.; Seipel, Ben; Liu, Bowen – Assessment for Effective Intervention, 2018
The computer-administered Multiple-Choice Online Causal Comprehension Assessment (MOCCA) for Grades 3 to 5 has an innovative, 40-item multiple-choice structure in which each distractor corresponds to a comprehension process upon which poor comprehenders have been shown to rely. This structure requires revised thinking about measurement issues…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Pilot Projects, Measurement
Davison, Mark L.; Biancarosa, Gina; Carlson, Sarah E.; Seipel, Ben; Liu, Bowen – Grantee Submission, 2018
The computer-administered Multiple-Choice Online Causal Comprehension Assessment (MOCCA) for Grades 3 to 5 has an innovative, 40-item multiple-choice structure in which each distractor corresponds to a comprehension process upon which poor comprehenders have been shown to rely. This structure requires revised thinking about measurement issues…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Pilot Projects, Measurement
Wang, Jing-Ru; Chen, Shin-Feng – International Journal of Science and Mathematics Education, 2016
This article reports on the development of an online dynamic approach for assessing and improving students' reading comprehension of science texts--the dynamic assessment for reading comprehension of science text (DARCST). The DARCST blended assessment and response-specific instruction into a holistic learning task for grades 5 and 6 students. The…
Descriptors: Computer Assisted Testing, Reading Comprehension, Science Instruction, Grade 5
Dorozhkin, Evgenij M.; Chelyshkova, Marina B.; Malygin, Alexey A.; Toymentseva, Irina A.; Anopchenko, Tatiana Y. – International Journal of Environmental and Science Education, 2016
The relevance of the problem under investigation is determined by the need to improve evaluation procedures and student assessment in a context of widening education, the development of new modes of study (such as blended learning, e-learning, and massive open online courses), the necessity of immediate feedback, and reliable and valid…
Descriptors: Student Evaluation, Evaluation Methods, Item Response Theory, Mathematical Models
Sinharay, Sandip; Wan, Ping; Choi, Seung W.; Kim, Dong-In – Journal of Educational Measurement, 2015
With an increase in the number of online tests, the number of interruptions during testing due to unexpected technical issues seems to be on the rise. For example, interruptions occurred during several recent state tests. When interruptions occur, it is important to determine the extent of their impact on the examinees' scores. Researchers such as…
Descriptors: Computer Assisted Testing, Testing Problems, Scores, Statistical Analysis
Sinharay, Sandip; Wan, Ping; Whitaker, Mike; Kim, Dong-In; Zhang, Litong; Choi, Seung W. – Journal of Educational Measurement, 2014
With an increase in the number of online tests, interruptions during testing due to unexpected technical issues seem unavoidable. For example, interruptions occurred during several recent state tests. When interruptions occur, it is important to determine the extent of their impact on the examinees' scores. There is a lack of research on this…
Descriptors: Computer Assisted Testing, Testing Problems, Scores, Regression (Statistics)
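A regression-based check of interruption impact, as these abstracts describe, can be sketched in miniature (a generic illustration, not the authors' actual method): fit post-interruption scores on pre-interruption scores for unaffected examinees, then inspect how far affected examinees' observed post scores fall from their predicted values.

```python
import statistics

def fit_line(x, y):
    """Ordinary least squares fit of y = a + b*x (x must vary)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def interruption_residuals(unaffected, affected):
    """Residuals of affected examinees under the unaffected-group line.

    Each argument is a list of (pre_score, post_score) pairs; a large
    negative residual suggests the interruption depressed the score.
    """
    a, b = fit_line([p for p, _ in unaffected], [q for _, q in unaffected])
    return [q - (a + b * p) for p, q in affected]
```

For example, if unaffected examinees score identically pre and post, an affected examinee with pre 25 and post 20 gets a residual of -5.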
Wei, Wei; Zheng, Ying – Computer Assisted Language Learning, 2017
This research provided a comprehensive evaluation and validation of the listening section of a newly introduced computerised test, Pearson Test of English Academic (PTE Academic). PTE Academic contains 11 item types assessing academic listening skills either alone or in combination with other skills. First, task analysis helped identify skills…
Descriptors: Listening Comprehension Tests, Computer Assisted Testing, Language Tests, Construct Validity
Hardcastle, Joseph; Herrmann-Abell, Cari F.; DeBoer, George E. – Grantee Submission, 2017
Can student performance on computer-based tests (CBT) and paper-and-pencil tests (PPT) be considered equivalent measures of student knowledge? States and school districts are grappling with this question, and although studies addressing this question are growing, additional research is needed. We report on the performance of students who took…
Descriptors: Academic Achievement, Computer Assisted Testing, Comparative Analysis, Student Evaluation
Li, Dongmei; Yi, Qing; Harris, Deborah – ACT, Inc., 2017
In preparation for online administration of the ACT® test, ACT conducted studies to examine the comparability of scores between online and paper administrations, including a timing study in fall 2013, a mode comparability study in spring 2014, and a second mode comparability study in spring 2015. This report presents major findings from these…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Comparative Analysis, Test Format
Retnawati, Heri – Turkish Online Journal of Educational Technology - TOJET, 2015
This study aimed to compare the accuracy of test scores from the Test of English Proficiency (TOEP) administered as a paper-and-pencil test (PPT) versus a computer-based test (CBT). Using participants' responses to the PPT documented from 2008-2010 and CBT TOEP data documented in 2013-2014 on sets 1A, 2A, and 3A for the Listening and…
Descriptors: Scores, Accuracy, Computer Assisted Testing, English (Second Language)
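The mode-comparability question running through the last few entries (CBT vs. PPT) is often summarized with a standardized mean difference between the two score distributions. A minimal sketch, not any of these studies' actual analyses:

```python
import math
import statistics

def cohens_d(scores_a, scores_b):
    """Pooled-SD standardized mean difference between two score samples.

    Values near zero suggest the two administration modes produce
    comparable score distributions; |d| around 0.2 is conventionally
    read as a small effect.
    """
    na, nb = len(scores_a), len(scores_b)
    va, vb = statistics.variance(scores_a), statistics.variance(scores_b)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (statistics.fmean(scores_a) - statistics.fmean(scores_b)) / pooled
```

A full comparability study would also examine reliability, item-level functioning, and timing, but the effect size is the usual first summary.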
Golovachyova, Viktoriya N.; Menlibekova, Gulbakhyt Zh.; Abayeva, Nella F.; Ten, Tatyana L.; Kogaya, Galina D. – International Journal of Environmental and Science Education, 2016
Computer-based monitoring systems that rely on tests may be the most effective way to evaluate knowledge. The problem of objective knowledge assessment by means of testing takes on a new dimension in the context of new paradigms in education. Analysis of the existing test methods enabled us to conclude that tests with selected…
Descriptors: Expertise, Computer Assisted Testing, Student Evaluation, Knowledge Level
Kahraman, Nilüfer – Eurasian Journal of Educational Research, 2014
Problem: Practitioners working with multiple-choice tests have long utilized Item Response Theory (IRT) models to evaluate the performance of test items for quality assurance. The use of similar applications for performance tests, however, is often encumbered due to the challenges encountered in working with complicated data sets in which local…
Descriptors: Item Response Theory, Licensing Examinations (Professions), Performance Based Assessment, Computer Simulation
Ainley, John; Fraillon, Julian; Schulz, Wolfram; Gebhardt, Eveline – Applied Measurement in Education, 2016
The development of information technologies has transformed the environment in which young people access, create, and share information. Many countries, having recognized the imperative of digital technology, acknowledge the need to educate young people in the use of these technologies so as to underpin economic and social benefits. This article…
Descriptors: Cross Cultural Studies, Information Literacy, Computer Literacy, Grade 8
Mahmud, Zamalia; Porter, Anne – Indonesian Mathematical Society Journal on Mathematics Education, 2015
Students' understanding of probability concepts has been investigated from various perspectives. This study set out to investigate the perceived understanding of probability concepts of forty-four students from the STAT131 Understanding Uncertainty and Variation course at the University of Wollongong, NSW. Rasch measurement, which is…
Descriptors: Probability, Concept Teaching, Item Response Theory, Computer Assisted Testing