Publication Date
In 2025: 0
Since 2024: 1
Since 2021 (last 5 years): 2
Since 2016 (last 10 years): 6
Since 2006 (last 20 years): 6
Descriptor
Computer Assisted Testing: 8
Responses: 8
Scores: 8
Test Items: 5
Comparative Analysis: 3
Evaluation Methods: 3
Achievement Gains: 2
Correlation: 2
Foreign Countries: 2
Guessing (Tests): 2
Hypothesis Testing: 2
Source
ETS Research Report Series: 1
Educational Studies in…: 1
Educational and Psychological…: 1
Language Testing: 1
Practical Assessment,…: 1
ProQuest LLC: 1
Author
Bennett, Randy Elliot: 1
Cao, Yi: 1
Jansen, Markus T.: 1
Jones, Ian: 1
Kaplan, Randy M.: 1
Lee, Shinhye: 1
Lynch, Sarah: 1
Sahin, Füsun: 1
Sangwin, Christopher J.: 1
Schulze, Ralf: 1
Winke, Paula: 1
Publication Type
Journal Articles: 5
Reports - Research: 4
Reports - Evaluative: 2
Dissertations/Theses -…: 1
Reports - Descriptive: 1
Speeches/Meeting Papers: 1
Location
Australia: 1
United Kingdom: 1
Assessments and Surveys
Program for International…: 1
Test of English as a Foreign…: 1
Jansen, Markus T.; Schulze, Ralf – Educational and Psychological Measurement, 2024
Thurstonian forced-choice modeling is considered a powerful new tool for estimating item and person parameters while simultaneously testing model fit. This assessment approach aims to reduce faking and other response tendencies that plague traditional self-report trait assessments. As a result of major recent…
Descriptors: Factor Analysis, Models, Item Analysis, Evaluation Methods
Lynch, Sarah – Practical Assessment, Research & Evaluation, 2022
In today's digital age, tests are increasingly being delivered on computers. Many of these computer-based tests (CBTs) have been adapted from paper-based tests (PBTs). However, this change in mode of test administration has the potential to introduce construct-irrelevant variance, affecting the validity of score interpretations. Because of this,…
Descriptors: Computer Assisted Testing, Tests, Scores, Scoring
Zhou, Jiawen; Cao, Yi – ETS Research Report Series, 2020
In this study, we explored retest effects on test scores and response time for repeaters, examinees who retake an examination. We looked at two groups of repeaters: those who took the same form twice and those who took different forms on their two attempts for a certification and licensure test. Scores improved over the two test attempts, and…
Descriptors: Testing, Test Items, Computer Assisted Testing, Licensing Examinations (Professions)
Sangwin, Christopher J.; Jones, Ian – Educational Studies in Mathematics, 2017
In this paper we report the results of an experiment designed to test the hypothesis that when faced with a question involving the inverse direction of a reversible mathematical process, students solve a multiple-choice version by verifying the answers presented to them by the direct method, not by undertaking the actual inverse calculation.…
Descriptors: Mathematics Achievement, Mathematics Tests, Multiple Choice Tests, Computer Assisted Testing
Lee, Shinhye; Winke, Paula – Language Testing, 2018
We investigated how young language learners process their responses on and perceive a computer-mediated, timed speaking test. Twenty 8-, 9-, and 10-year-old non-native English-speaking children (NNSs) and eight same-aged, native English-speaking children (NSs) completed seven computerized sample TOEFL® Primary™ speaking test tasks. We investigated…
Descriptors: Elementary School Students, Second Language Learning, Responses, Computer Assisted Testing
Sahin, Füsun – ProQuest LLC, 2017
Examining testing processes, as well as scores, is needed for a complete understanding of the validity and fairness of computer-based assessments. Examinees' rapid guessing and insufficient familiarity with computers have been found to be major issues that weaken the validity arguments of scores. This study has three goals: (a) improving…
Descriptors: Computer Assisted Testing, Evaluation Methods, Student Evaluation, Guessing (Tests)
Kaplan, Randy M.; Bennett, Randy Elliot – 1994
This study explores the potential for using a computer-based scoring procedure for the formulating-hypotheses (F-H) item. This item type presents a situation and asks the examinee to generate explanations for it. Each explanation is judged right or wrong, and the number of creditable explanations is summed to produce an item score. Scores were…
Descriptors: Automation, Computer Assisted Testing, Correlation, Higher Education
Wise, Steven L. – 1996
In recent years, a controversy has arisen about the advisability of allowing examinees to review their test items and possibly change answers. Arguments for and against allowing item review are discussed, and issues that a test designer should consider when designing a Computerized Adaptive Test (CAT) are identified. Most CATs do not allow…
Descriptors: Achievement Gains, Adaptive Testing, Computer Assisted Testing, Error Correction