Goldhammer, Frank; Kroehne, Ulf; Hahnel, Carolin; Naumann, Johannes; De Boeck, Paul – Journal of Educational Measurement, 2024
The efficiency of cognitive component skills is typically assessed with speeded performance tests. Interpreting only effective ability or effective speed as efficiency may be challenging because of the within-person dependency between both variables (speed-ability tradeoff, SAT). The present study measures efficiency as effective ability…
Descriptors: Timed Tests, Efficiency, Scores, Test Interpretation
Item Order and Speededness: Implications for Test Fairness in Higher Educational High-Stakes Testing
Becker, Benjamin; van Rijn, Peter; Molenaar, Dylan; Debeer, Dries – Assessment & Evaluation in Higher Education, 2022
A common approach to increase test security in higher educational high-stakes testing is the use of different test forms with identical items but different item orders. The effects of such varied item orders are relatively well studied, but findings have generally been mixed. When multiple test forms with different item orders are used, we argue…
Descriptors: Information Security, High Stakes Tests, Computer Security, Test Items
Goldhammer, Frank – Measurement: Interdisciplinary Research and Perspectives, 2015
The main challenge of ability tests relates to the difficulty of items, whereas speed tests demand that test takers complete very easy items quickly. This article proposes a conceptual framework to represent how performance depends on both between-person differences in speed and ability and the speed-ability compromise within persons. Related…
Descriptors: Ability, Aptitude Tests, Reaction Time, Test Items
Sagoo, Mandeep Gill; Smith, Claire France; Gosden, Edward – Anatomical Sciences Education, 2016
The objective structural practical examination (OSPE) is a timed examination that assesses topographical and/or applied knowledge of anatomy with the use of cadaveric resources and medical images. This study investigated whether elements of question design (provision of clinical context, type of visual resources used, gender context, and…
Descriptors: Anatomy, Timed Tests, Instructional Design, Test Items
Hailey, Emily; Callahan, Carolyn M.; Azano, Amy; Moon, Tonya R. – Journal of Advanced Academics, 2012
Reliability and validity are integral concepts in assessment design. Test speededness, the influence of time constraints on test taker performance, is often an overlooked threat to reliability and validity, especially in classroom-based testing. The purpose of this study is to evaluate the degree of test speededness of classroom-based assessments…
Descriptors: Academically Gifted, Student Evaluation, Validity, Grade 3
Jansen, M. G. H.; Glas, C. A. W. – Psychometrika, 2005
Two new tests for a model for the response times on pure speed tests by Rasch (1960) are proposed. The model is based on the assumption that the test response times are approximately gamma distributed, with known index parameters and unknown rate parameters. The rate parameters are decomposed in a subject ability parameter and a test difficulty…
Descriptors: Timed Tests, Reaction Time, Models, Difficulty Level
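The gamma response-time assumption described in the abstract above can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: response times are drawn from a gamma distribution with a known index (shape) parameter, and the rate is decomposed into a person-speed parameter and an item parameter, as the abstract describes. All parameter names and values below are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gamma response-time model in the spirit of Rasch (1960):
# T_ij ~ Gamma(shape=a_j, rate=xi_i * eps_j), where xi_i is a person
# speed parameter and eps_j an item rate component. Values are invented.
n_persons, n_items = 500, 10
shape = np.full(n_items, 2.0)                 # known index parameters a_j
speed = rng.lognormal(0.0, 0.3, n_persons)    # person speed xi_i
item = rng.lognormal(0.0, 0.2, n_items)       # item rate component eps_j

rate = speed[:, None] * item[None, :]         # rate for each (person, item)
times = rng.gamma(shape[None, :], 1.0 / rate) # numpy parameterizes by scale = 1/rate

# Faster persons (larger xi_i) should show shorter mean response times.
fast = speed > np.median(speed)
slow = ~fast
print(times[fast].mean() < times[slow].mean())
```

Decomposing the rate multiplicatively is what lets the model separate a subject's overall speed from item-specific demands, which is the structure the proposed tests evaluate.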
Schnipke, Deborah L. – 1996
When running out of time on a multiple-choice test, some examinees are likely to respond rapidly to the remaining unanswered items in an attempt to get some items right by chance. Because these responses will tend to be incorrect, the presence of "rapid-guessing behavior" could cause these items to appear to be more difficult than they…
Descriptors: Difficulty Level, Estimation (Mathematics), Guessing (Tests), Item Response Theory
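The mechanism in the abstract above, that end-of-test rapid guessing drags down the proportion correct and makes late items look harder, can be shown with a small simulation. This is a hypothetical illustration, not Schnipke's analysis; the 30% run-out rate, five-option format, and true solution probability are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: some examinees run out of time and respond by
# chance to the last five items, lowering those items' classical
# p-values (proportion correct) and inflating apparent difficulty.
n_examinees, n_items, n_options = 1000, 20, 5
true_p = 0.7                              # assumed true solution probability
ran_out = rng.random(n_examinees) < 0.3   # 30% run out of time at item 15

solved = rng.random((n_examinees, n_items)) < true_p
guessed = rng.random((n_examinees, n_items)) < 1.0 / n_options

correct = solved.copy()
# Rapid-guessing behavior: items 15..19 answered by chance for that group.
correct[ran_out, 15:] = guessed[ran_out, 15:]

p_values = correct.mean(axis=0)
print(p_values[:15].mean())   # near the true 0.70
print(p_values[15:].mean())   # depressed toward 0.7*0.7 + 0.3*0.2 = 0.55
```

The late items are no harder in truth; only the response behavior changed, which is exactly why the abstract flags rapid guessing as a threat to difficulty estimation.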

Rindler, Susan Ellerin – Educational and Psychological Measurement, 1980
A short verbal aptitude test was administered under varying time limits with answer sheets specially designed to allow items that had been skipped to be identified. It appeared advantageous for the more able (based on grade point averages) but disadvantageous for the less able to skip items. (Author/RL)
Descriptors: Aptitude Tests, Difficulty Level, Higher Education, Response Style (Tests)
Bridgeman, Brent; Cline, Frederick – Journal of Educational Measurement, 2004
Time limits on some computer-adaptive tests (CATs) are such that many examinees have difficulty finishing, and some examinees may be administered tests with more time-consuming items than others. Results from over 100,000 examinees suggested that about half of the examinees must guess on the final six questions of the analytical section of the…
Descriptors: Guessing (Tests), Timed Tests, Adaptive Testing, Computer Assisted Testing
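A piece of arithmetic prompted by the abstract above: if an examinee must blindly guess on the final six questions, the expected number correct by chance alone is n/k for k answer options. The five-option format below is an assumption of this sketch, not stated in the abstract.

```python
# Hypothetical expected chance score when guessing blindly on the final
# questions of a timed section: E[correct] = n_guessed * (1 / n_options).
n_guessed, n_options = 6, 5
expected_correct = n_guessed / n_options
print(expected_correct)  # 1.2 items expected correct from pure guessing
```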
Making Use of Response Times in Standardized Tests: Are Accuracy and Speed Measuring the Same Thing?
Scrams, David J.; Schnipke, Deborah L. – 1997
Response accuracy and response speed provide separate measures of performance. Psychometricians have tended to focus on accuracy with the goal of characterizing examinees on the basis of their ability to respond correctly to items from a given content domain. With the advent of computerized testing, response times can now be recorded unobtrusively…
Descriptors: Computer Assisted Testing, Difficulty Level, Item Response Theory, Psychometrics
Lazarte, Alejandro A. – 1999
Two experiments reproduced in a simulated computerized test-taking situation the effect of two of the main determinants in answering an item in a test: the difficulty of the item and the time available to answer it. A model is proposed for the time to respond or abandon an item and for the probability of abandoning it or answering it correctly. In…
Descriptors: Computer Assisted Testing, Difficulty Level, Higher Education, Probability
Halkitis, Perry N.; And Others – 1996
The relationship between test item characteristics and testing time was studied for a computer-administered licensing examination. One objective of the study was to develop a model to predict testing time on the basis of known item characteristics. Response latencies (i.e., the amount of time taken by examinees to read, review, and answer items)…
Descriptors: Computer Assisted Testing, Difficulty Level, Estimation (Mathematics), Licensing Examinations (Professions)
Bergstrom, Betty; And Others – 1994
Examinee response times from a computerized adaptive test taken by 204 examinees taking a certification examination were analyzed using a hierarchical linear model. Two equations were posed: a within-person model and a between-person model. Variance within persons was eight times greater than variance between persons. Several variables…
Descriptors: Adaptive Testing, Adults, Certification, Computer Assisted Testing
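The variance split reported in the abstract above has a direct interpretation in hierarchical-model terms: if within-person variance is eight times the between-person variance, only a small share of response-time variance reflects stable person differences. The unit variance below is an arbitrary assumption; only the 8:1 ratio comes from the abstract.

```python
# Hypothetical numbers illustrating the reported 8:1 variance ratio.
# The intraclass correlation (ICC) is the share of total variance that
# lies between persons in a two-level model.
var_between = 1.0                 # assumed between-person variance (arbitrary unit)
var_within = 8.0 * var_between    # ratio taken from the abstract
icc = var_between / (var_between + var_within)
print(icc)  # 1/9, i.e. only about 11% of variance lies between persons
```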
Wainer, Howard – 1985
It is important to estimate the number of examinees who reached a test item, because item difficulty is defined by the number who answered correctly divided by the number who reached the item. A new method is presented and compared to the previously used definition of three categories of response to an item: (1) answered; (2) omitted--a…
Descriptors: College Entrance Examinations, Difficulty Level, Estimation (Mathematics), High Schools
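The definition at issue in the abstract above, item difficulty as the number answering correctly divided by the number who *reached* the item, can be sketched with the three response categories it mentions. The coding scheme below (1 = correct, 0 = answered incorrectly or omitted, `None` = not reached) is an assumption of this illustration.

```python
# Hypothetical sketch of reached-based difficulty: an omit (a skipped item
# followed by a later answer) counts toward the denominator; a trailing
# run of blanks (not reached) does not. Data values are invented.
NOT_REACHED = None

responses = [
    [1, 0, 1, 1, 0],
    [1, 1, 0, NOT_REACHED, NOT_REACHED],                      # stopped after item 3
    [0, 1, NOT_REACHED, NOT_REACHED, NOT_REACHED],            # stopped after item 2
    [1, 1, 1, 0, 1],
]

n_items = len(responses[0])
p_all, p_reached = [], []
for j in range(n_items):
    col = [row[j] for row in responses]
    n_correct = sum(1 for r in col if r == 1)
    n_reached = sum(1 for r in col if r is not NOT_REACHED)
    p_all.append(n_correct / len(col))        # divides by every examinee
    p_reached.append(n_correct / n_reached)   # divides only by those who reached

print(p_all)
print(p_reached)
```

The two definitions diverge exactly on late items under speeded conditions, which is why estimating the number who reached each item matters for difficulty estimation.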
Schaeffer, Evonne L. – 1993
Context effects in test taking were explored, paying attention to the psychological processes that occur during test taking, and modeling context effects for each individual at the item block level. A sample of 279 high school students (140 females and 139 males) was chosen to yield adequate power for detecting interactions. Reading test forms…
Descriptors: Cognitive Processes, Context Effect, Difficulty Level, High School Students