Wise, Steven L. – Measurement: Interdisciplinary Research and Perspectives, 2015
The growing presence of computer-based testing has brought with it the capability to routinely capture the time that test takers spend on individual test items. This, in turn, has led to an increased interest in potential applications of response time in measuring intellectual ability and achievement. Goldhammer (this issue) provides a very useful…
Descriptors: Reaction Time, Measurement, Computer Assisted Testing, Achievement Tests
Schmitt, T. A.; Sass, D. A.; Sullivan, J. R.; Walker, C. M. – International Journal of Testing, 2010
Imposed time limits on computer adaptive tests (CATs) can result in examinees having difficulty completing all items, thus compromising the validity and reliability of ability estimates. In this study, the effects of speededness were explored in a simulated CAT environment by varying examinee response patterns to end-of-test items. Expectedly,…
Descriptors: Monte Carlo Methods, Simulation, Computer Assisted Testing, Adaptive Testing
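The speededness effect Schmitt et al. describe can be sketched with a small Monte Carlo simulation. The sketch below is a fixed-form stand-in for their CAT setting, with made-up item parameters: when the final items of a 1PL-scored test are answered by random guessing under time pressure, the maximum-likelihood ability estimate is pulled downward.

```python
import math, random

random.seed(7)

def p_correct(theta, b):
    """Rasch (1PL) probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def mle_theta(responses, bs):
    """Grid-search maximum-likelihood estimate of ability."""
    grid = [g / 20.0 for g in range(-80, 81)]  # -4.0 to 4.0 in steps of 0.05
    def loglik(t):
        return sum(math.log(p_correct(t, b)) if r else math.log(1.0 - p_correct(t, b))
                   for r, b in zip(responses, bs))
    return max(grid, key=loglik)

def simulate(theta=0.0, n_items=30, n_rushed=0, n_reps=200):
    """Mean ability estimate when the last n_rushed items are answered by random guessing."""
    bs = [random.gauss(0, 1) for _ in range(n_items)]  # made-up item difficulties
    estimates = []
    for _ in range(n_reps):
        resp = [random.random() < p_correct(theta, b) for b in bs[:n_items - n_rushed]]
        resp += [random.random() < 0.25 for _ in range(n_rushed)]  # 4-option random guessing
        estimates.append(mle_theta(resp, bs))
    return sum(estimates) / len(estimates)

print(simulate(n_rushed=0))  # near the true ability of 0.0
print(simulate(n_rushed=6))  # pulled below the no-guessing estimate
```

Replacing honest end-of-test responses with chance-level guesses lowers the expected score on those items from roughly .5 to .25, which the likelihood attributes to lower ability — the bias Schmitt et al. study systematically by varying response patterns.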
Performance Indicators in Math: Implications for Brief Experimental Analysis of Academic Performance
VanDerheyden, Amanda M.; Burns, Matthew K. – Journal of Behavioral Education, 2009
Brief experimental analysis (BEA) can be used to specify intervention characteristics that produce positive learning gains for individual students. A key challenge to the use of BEA for intervention planning is the identification of performance indicators (including topography of the skill, measurement characteristics, and decision criteria) that…
Descriptors: Intervention, Curriculum Based Assessment, Mathematics Skills, Educational Indicators
Brothen, Thomas; Wambach, Cathrine – Teaching of Psychology, 2004
This study evaluated 15-min time limits on 10-item multiple-choice quizzes delivered over the Internet. Students in a computer-assisted course in human development spent less time on quizzes and performed better on exams when they had time limits on their quizzes. We conclude that time limits are associated with better learning and exam…
Descriptors: Internet, Timed Tests, Computer Assisted Testing
Beaujean, A. Alexander; Knoop, Andrew; Holliday, Gregory – Learning Disability Quarterly, 2006
The purpose of this pilot study was to determine if a single math-based chronometric task could accurately discriminate between college students with and without a diagnosed math disorder. Analyzing data from 31 students (6 in the case group, 25 in the clinical comparison group), it was found that the single chronometric task could accurately…
Descriptors: Psychometrics, College Students, Predictor Variables, Educational Diagnosis
Bodmann, Shawn M.; Robinson, Daniel H. – Journal of Educational Computing Research, 2004
This study investigated the effect of several different modes of test administration on scores and completion times. In Experiment 1, paper-based assessment was compared to computer-based assessment. Undergraduates completed the computer-based assessment faster than the paper-based assessment, with no difference in scores. Experiment 2 assessed…
Descriptors: Computer Assisted Testing, Higher Education, Undergraduate Students, Evaluation Methods

Adema, Jos J. – Journal of Educational Measurement, 1990
Mixed integer linear programming models for customizing two-stage tests are presented. Model constraints are imposed with respect to test composition, administration time, inter-item dependencies, and other practical considerations. The models can be modified for use in the construction of multistage tests. (Author/TJH)
Descriptors: Adaptive Testing, Computer Assisted Testing, Equations (Mathematics), Linear Programing
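Adema's assembly problem can be illustrated in miniature. The sketch below uses a hypothetical six-item bank and solves the 0/1 item-selection problem by exhaustive search rather than a mixed integer programming solver; the objective (maximize information at a target ability) and the constraints (test length, administration time, content balance) mirror the kind of model the paper formalizes.

```python
from itertools import combinations

# Hypothetical item bank: (id, information at target theta, admin time in minutes, content area)
bank = [
    ("i1", 0.42, 2.0, "algebra"),
    ("i2", 0.35, 1.5, "algebra"),
    ("i3", 0.50, 3.0, "geometry"),
    ("i4", 0.28, 1.0, "geometry"),
    ("i5", 0.45, 2.5, "algebra"),
    ("i6", 0.33, 1.5, "geometry"),
]

TEST_LENGTH = 4    # select exactly 4 items
TIME_LIMIT = 8.0   # total administration time constraint
MIN_PER_AREA = 1   # content-balance constraint

def feasible(items):
    """Check the time and content-coverage constraints."""
    if sum(t for _, _, t, _ in items) > TIME_LIMIT:
        return False
    for area in ("algebra", "geometry"):
        if sum(1 for it in items if it[3] == area) < MIN_PER_AREA:
            return False
    return True

# Exhaustive search over all length-4 subsets; a MILP solver does this
# implicitly (and tractably) for realistic bank sizes.
best = max((c for c in combinations(bank, TEST_LENGTH) if feasible(c)),
           key=lambda c: sum(info for _, info, _, _ in c))
print([it[0] for it in best])  # → ['i1', 'i2', 'i3', 'i6']
```

Each item corresponds to a binary decision variable in the MILP formulation; the constraints above become linear inequalities over those variables, which is what makes solver-based assembly scale to real item banks.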
Bridgeman, Brent; Cline, Frederick – Journal of Educational Measurement, 2004
Time limits on some computer-adaptive tests (CATs) are such that many examinees have difficulty finishing, and some examinees may be administered tests with more time-consuming items than others. Results from over 100,000 examinees suggested that about half of the examinees must guess on the final six questions of the analytical section of the…
Descriptors: Guessing (Tests), Timed Tests, Adaptive Testing, Computer Assisted Testing
Making Use of Response Times in Standardized Tests: Are Accuracy and Speed Measuring the Same Thing?
Scrams, David J.; Schnipke, Deborah L. – 1997
Response accuracy and response speed provide separate measures of performance. Psychometricians have tended to focus on accuracy with the goal of characterizing examinees on the basis of their ability to respond correctly to items from a given content domain. With the advent of computerized testing, response times can now be recorded unobtrusively…
Descriptors: Computer Assisted Testing, Difficulty Level, Item Response Theory, Psychometrics
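Scrams and Schnipke's question can be made concrete with a simulation under a hierarchical speed/ability model in the spirit of van der Linden's later work (an assumption here, not the paper's own model): ability theta and speed tau are drawn with correlation rho, accuracy follows a 1PL model, and log response times are normal around an item time intensity minus tau. Observed score and observed speed then correlate only to the extent that theta and tau do.

```python
import math, random

random.seed(3)

def bivariate(rho):
    """Draw (theta, tau) from a standard bivariate normal with correlation rho."""
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    return z1, rho * z1 + math.sqrt(1 - rho * rho) * z2

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

def simulate(rho, n_persons=2000, n_items=40):
    """Correlation between observed score and observed speed for a given rho."""
    bs = [random.gauss(0, 1) for _ in range(n_items)]      # item difficulties (made up)
    betas = [random.gauss(0, 0.3) for _ in range(n_items)] # item time intensities (made up)
    scores, speeds = [], []
    for _ in range(n_persons):
        theta, tau = bivariate(rho)
        score = sum(random.random() < 1 / (1 + math.exp(-(theta - b))) for b in bs)
        # log response time ~ Normal(beta_i - tau, 0.4): higher tau means faster work
        mean_logt = sum(random.gauss(beta - tau, 0.4) for beta in betas) / n_items
        scores.append(score)
        speeds.append(-mean_logt)  # higher value = faster examinee
    return pearson(scores, speeds)

print(simulate(rho=0.0))  # near zero: speed and accuracy measure different things
print(simulate(rho=0.8))  # substantial, though attenuated below rho by measurement error
```

When rho is zero, response times add information about speed that scores cannot carry at all; as rho grows, the two measures increasingly overlap — which is exactly the empirical question the paper raises.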
Lazarte, Alejandro A. – 1999
Two experiments reproduced in a simulated computerized test-taking situation the effect of two of the main determinants in answering an item in a test: the difficulty of the item and the time available to answer it. A model is proposed for the time to respond or abandon an item and for the probability of abandoning it or answering it correctly. In…
Descriptors: Computer Assisted Testing, Difficulty Level, Higher Education, Probability
Wise, Steven L.; Bhola, Dennison S.; Yang, Sheng-Ta – Educational Measurement: Issues and Practice, 2006
The attractiveness of computer-based tests (CBTs) is due largely to their capability to expand the ways we conduct testing. A relatively unexplored application, however, is actively using the computer to reduce construct-irrelevant variance while a test is being administered. This investigation introduces the effort-monitoring CBT, in which the…
Descriptors: Computer Assisted Testing, Test Validity, Reaction Time, Guessing (Tests)
Slater, Sharon C.; Schaeffer, Gary A. – 1996
The General Computer Adaptive Test (CAT) of the Graduate Record Examinations (GRE) includes three operational sections that are separately timed and scored. A "no score" is reported if the examinee answers fewer than 80% of the items or if the examinee does not answer all of the items and leaves the section before time expires. The 80%…
Descriptors: Adaptive Testing, College Students, Computer Assisted Testing, Equal Education
Schnipke, Deborah L.; Pashley, Peter J. – 1997
Differences in test performance on time-limited tests may be due in part to differential response-time rates between subgroups, rather than real differences in the knowledge, skills, or developed abilities of interest. With computer-administered tests, response times are available and may be used to address this issue. This study investigates…
Descriptors: Computer Assisted Testing, Data Analysis, English, High Stakes Tests
Wise, Steven L. – 1997
The perspective of the examinee during the administration of a computerized adaptive test (CAT) is discussed, focusing on issues of test development. Item review is the first issue discussed. Virtually no CATs provide the opportunity for the examinee to go back and review, and possibly change, answers. There are arguments on either side of the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Attitudes, Equal Education
Wise, Steven L. – 1996
In recent years, a controversy has arisen about the advisability of allowing examinees to review their test items and possibly change answers. Arguments for and against allowing item review are discussed, and issues that a test designer should consider when designing a Computerized Adaptive Test (CAT) are identified. Most CATs do not allow…
Descriptors: Achievement Gains, Adaptive Testing, Computer Assisted Testing, Error Correction