Showing 1 to 15 of 36 results
Peer reviewed
Sampson, Demetrios G., Ed.; Ifenthaler, Dirk, Ed.; Isaías, Pedro, Ed.; Mascia, Maria Lidia, Ed. – International Association for Development of the Information Society, 2019
These proceedings contain the papers of the 16th International Conference on Cognition and Exploratory Learning in the Digital Age (CELDA 2019), held November 7-9, 2019, organized by the International Association for Development of the Information Society (IADIS) and co-organized by the Università degli Studi di Cagliari, Italy.…
Descriptors: Teaching Methods, Cooperative Learning, Engineering Education, Critical Thinking
Peer reviewed
Wise, Steven L.; Finney, Sara J.; Enders, Craig K.; Freeman, Sharon A.; Severance, Donald D. – Applied Measurement in Education, 1999
Examined whether item review on a computerized adaptive test could be used by examinees to inflate their scores. Two studies involving 139 undergraduates suggest that examinees are not highly proficient at discriminating item difficulty. A simulation study showed the usefulness of a strategy identified by G. Kingsbury (1996) as a way to…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Higher Education
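The Kingsbury (1996) strategy examined above exploits a side channel: an adaptive test tends to follow a correct answer with a harder item and an incorrect answer with an easier one, so a drop in difficulty hints that the previous answer should be changed during review. A minimal Monte Carlo sketch of that logic under a Rasch model (the pool size, test length, four-option items, and the probability that a changed answer lands on the correct option are illustrative assumptions, not values from the study):

```python
import math
import random

def p(theta, b):
    """Rasch probability of answering an item of difficulty b correctly."""
    return 1 / (1 + math.exp(-(theta - b)))

def mle(bs, us):
    """Newton-Raphson maximum-likelihood ability estimate, clamped to [-4, 4]."""
    theta = 0.0
    for _ in range(50):
        probs = [p(theta, b) for b in bs]
        grad = sum(u - q for u, q in zip(us, probs))
        info = sum(q * (1 - q) for q in probs)
        theta = max(-4.0, min(4.0, theta + grad / info))
    return theta

def take_cat(true_theta, rng, review=False, n=25, options=4):
    pool = [rng.uniform(-3, 3) for _ in range(300)]
    admin, resp = [], []
    est = 0.0
    for _ in range(n):
        b = min(pool, key=lambda x: abs(x - est))  # best-matched remaining item
        pool.remove(b)
        resp.append(1 if rng.random() < p(true_theta, b) else 0)
        admin.append(b)
        est = mle(admin, resp)
    if review:
        # Kingsbury-style review: a drop in difficulty signals a probable miss.
        for i in range(len(admin) - 1):
            if admin[i + 1] < admin[i]:
                if resp[i] == 1:
                    resp[i] = 0                    # changing a correct answer loses it
                elif rng.random() < 1 / (options - 1):
                    resp[i] = 1                    # random switch among remaining options
    return mle(admin, resp)

rng = random.Random(7)
bias_honest, bias_gamed = [], []
for _ in range(200):
    theta = rng.gauss(0, 1)
    bias_honest.append(take_cat(theta, rng) - theta)
    bias_gamed.append(take_cat(theta, rng, review=True) - theta)
print("mean bias, honest:", sum(bias_honest) / 200)
print("mean bias, with review strategy:", sum(bias_gamed) / 200)
```

Under these assumptions the review strategy shifts ability estimates upward; the empirical point of the study above is that real examinees judge item difficulty too poorly to exploit the signal fully.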
Peer reviewed
Rocklin, Thomas; O'Donnell, Angela M. – Journal of Educational Psychology, 1987
An experiment was conducted that contrasted a variant of computerized adaptive testing, self-adapted testing, with two traditional tests. Participants completed a self-report of test anxiety and were randomly assigned to take one of three tests of verbal ability. Subjects generally chose more difficult items as the test progressed. (Author/LMO)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Difficulty Level
Peer reviewed
Al-A'ali, Mansoor – Educational Technology & Society, 2007
Computer adaptive testing is the study of scoring tests and questions based on assumptions about the mathematical relationship between examinees' ability and their responses. Adaptive student tests, which are based on item response theory (IRT), have many advantages over conventional tests. We use the least squares method, a…
Descriptors: Educational Testing, Higher Education, Elementary Secondary Education, Student Evaluation
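The abstract truncates before saying how the least-squares method enters Al-A'ali's procedure; one plausible reading (an assumption here, not the paper's stated formulation) is least-squares ability estimation: pick the θ that minimizes the squared gap between the observed 0/1 responses and the probabilities an IRT model predicts. A small sketch with a two-parameter logistic model and a grid search:

```python
import math

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) item response function."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def ls_theta(items, responses, lo=-4.0, hi=4.0, steps=801):
    """Grid-search least-squares ability estimate over [lo, hi]."""
    best, best_sse = lo, float("inf")
    for k in range(steps):
        theta = lo + (hi - lo) * k / (steps - 1)
        sse = sum((u - p_correct(theta, a, b)) ** 2
                  for (a, b), u in zip(items, responses))
        if sse < best_sse:
            best, best_sse = theta, sse
    return best

# Hypothetical calibrated items as (discrimination, difficulty) pairs.
items = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.7), (1.0, 1.2)]
responses = [1, 1, 0, 0]
print(round(ls_theta(items, responses), 2))
```

The grid search is used here for transparency; a derivative-based minimizer would serve equally well.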
Wise, Steven L.; And Others – 1997
The degree to which item review on a computerized adaptive test (CAT) could be used by examinees to inflate their scores artificially was studied. G. G. Kingsbury (1996) described a strategy in which examinees could use the changes in item difficulty during a CAT to determine which items' answers are incorrect and should be changed during item…
Descriptors: Achievement Gains, Adaptive Testing, College Students, Computer Assisted Testing
Lazarte, Alejandro A. – 1999
Two experiments reproduced, in a simulated computerized test-taking situation, the effects of two main determinants of answering a test item: the item's difficulty and the time available to answer it. A model is proposed for the time taken to respond to or abandon an item, and for the probability of abandoning it or answering it correctly. In…
Descriptors: Computer Assisted Testing, Difficulty Level, Higher Education, Probability
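The abstract does not give the functional form of Lazarte's model, so the sketch below only illustrates the kind of model described: the log-odds of a correct answer fall with item difficulty and rise with the time available, and abandonment absorbs part of the remaining failure probability (all coefficients are invented for the example):

```python
import math

def outcome_probs(difficulty, time_avail, beta_d=1.0, beta_t=0.8, c=0.5):
    """Illustrative model: log-odds of success fall with difficulty and rise
    with log available time; abandonment takes a fixed share (here 60%)
    of the failure probability."""
    z = c - beta_d * difficulty + beta_t * math.log(time_avail)
    p_correct = 1 / (1 + math.exp(-z))
    p_abandon = 0.6 * (1 - p_correct)
    p_wrong = 1 - p_correct - p_abandon
    return {"correct": p_correct, "abandon": p_abandon, "wrong": p_wrong}

for t in (10, 30, 90):  # seconds available for the item
    probs = outcome_probs(difficulty=1.0, time_avail=t)
    print(t, {k: round(v, 3) for k, v in probs.items()})
```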
Peer reviewed
Kobrin, Jennifer L.; Young, John W. – Applied Measurement in Education, 2003
Studied the cognitive equivalence of computerized and paper-and-pencil reading comprehension tests using verbal protocol analysis. Results for 48 college students indicate that the only significant difference between the computerized and paper-and-pencil tests was in the frequency of identifying important information in the passage. (SLD)
Descriptors: Cognitive Processes, College Students, Computer Assisted Testing, Difficulty Level
Plake, Barbara S.; And Others – 1994
In self-adapted testing (SAT), examinees select the difficulty level of items administered. This study investigated three variations of prior information provided when taking an SAT: (1) no information (examinees selected item difficulty levels without prior information); (2) view (examinees inspected a typical item from each difficulty level…
Descriptors: Adaptive Testing, College Students, Computer Assisted Testing, Difficulty Level
Peer reviewed
Plake, Barbara S.; And Others – Educational and Psychological Measurement, 1995
No significant differences in performance or anxiety were found among college students (n=218) taking a self-adapted test who selected item difficulty without any prior information, inspected an item before selecting, or answered a typical item and received performance feedback. (SLD)
Descriptors: Achievement, Adaptive Testing, College Students, Computer Assisted Testing
Bennett, Randy Elliot; Rock, Donald A. – 1993
Formulating-Hypotheses (F-H) items present a situation and ask the examinee to generate as many explanations for it as possible. This study examined the generalizability, validity, and examinee perceptions of a computer-delivered version of the task. Eight F-H questions were administered to 192 graduate students. Half of the items restricted…
Descriptors: Computer Assisted Testing, Difficulty Level, Generalizability Theory, Graduate Students
Anderson, Paul S.; Hyers, Albert D. – 1991
Three descriptive statistics (difficulty, discrimination, and reliability) of multiple-choice (MC) test items were compared to those of a new (1980s) format of machine-scored questions. The new method, answer-bank multi-digit testing (MDT), uses alphabetized lists of up to 1,000 alternatives and approximates the completion style of assessment…
Descriptors: College Students, Comparative Testing, Computer Assisted Testing, Correlation
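The three statistics Anderson and Hyers compare are standard classical-test-theory quantities, computable from a 0/1 scoring matrix whether items are multiple-choice or multi-digit: difficulty is the proportion correct, discrimination is taken here as the item's point-biserial correlation with the rest-of-test score, and KR-20 is one common reliability coefficient (the paper's exact choices are not given in the abstract). A compact sketch:

```python
import statistics

def item_analysis(matrix):
    """Classical item statistics for a 0/1 score matrix
    (rows = examinees, columns = items)."""
    n_items = len(matrix[0])
    totals = [sum(row) for row in matrix]
    per_item = []
    for j in range(n_items):
        scores = [row[j] for row in matrix]
        p = sum(scores) / len(scores)                     # difficulty
        rest = [t - s for t, s in zip(totals, scores)]    # rest-of-test score
        disc = (statistics.correlation(scores, rest)
                if len(set(scores)) > 1 else 0.0)         # discrimination
        per_item.append((p, disc))
    pq = sum(p * (1 - p) for p, _ in per_item)
    var_total = statistics.pvariance(totals)  # population variance; conventions differ
    kr20 = (n_items / (n_items - 1)) * (1 - pq / var_total)
    return per_item, kr20

data = [  # hypothetical responses: 5 examinees x 4 items
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
]
per_item, kr20 = item_analysis(data)
print(per_item)
print("KR-20:", round(kr20, 3))
```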
Thompson, Bruce; Levitov, Justin E. – Collegiate Microcomputer, 1985
Discusses features of a microcomputer program, SCOREIT, used at Loyola University in New Orleans and several high schools to score and analyze test results. Benefits and dimensions of the program's automated test and item analysis are outlined, and several examples illustrating test and item analyses by SCOREIT are presented. (MBR)
Descriptors: Computer Assisted Testing, Computer Software, Difficulty Level, Higher Education
Roos, Linda L.; Wise, Steven L.; Finney, Sara J. – 1998
Previous studies have shown that, when administered a self-adapted test, a few examinees will choose item difficulty levels that are not well-matched to their proficiencies, resulting in high standard errors of proficiency estimation. This study investigated whether the previously observed effects of a self-adapted test--lower anxiety and higher…
Descriptors: Adaptive Testing, College Students, Comparative Analysis, Computer Assisted Testing
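The "high standard errors" observed for poorly matched difficulty choices follow directly from item information: under a Rasch model an item is most informative when its difficulty sits at the examinee's proficiency, and the asymptotic standard error of the ability estimate is the reciprocal square root of the summed information. A short worked check (numbers chosen only to illustrate):

```python
import math

def se_theta(theta, difficulties):
    """Asymptotic standard error of the Rasch ability estimate:
    SE = 1 / sqrt(sum of item information p * (1 - p))."""
    info = 0.0
    for b in difficulties:
        p = 1 / (1 + math.exp(-(theta - b)))
        info += p * (1 - p)
    return 1 / math.sqrt(info)

theta = 0.0
matched = [0.0] * 10          # items at the examinee's level
mismatched = [3.0] * 10       # items far too hard (a poor self-selection)
print(se_theta(theta, matched))     # ~0.63
print(se_theta(theta, mismatched))  # ~1.49
```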
Wise, Steven L.; And Others – 1991
According to item response theory (IRT), examinee ability estimation is independent of the particular set of test items administered from a calibrated pool. Although the most popular application of this feature of IRT is computerized adaptive (CA) testing, a recently proposed alternative is self-adapted (SA) testing, in which examinees choose the…
Descriptors: Ability Identification, Adaptive Testing, College Students, Comparative Testing
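The invariance property this abstract leans on can be demonstrated in a few lines: simulate one examinee on two calibrated item sets of very different difficulty, and the maximum-likelihood ability estimates from each should agree up to sampling error. A sketch under a Rasch model (item counts and difficulty ranges are arbitrary):

```python
import math
import random

def p(theta, b):
    """Rasch probability of a correct response."""
    return 1 / (1 + math.exp(-(theta - b)))

def mle(bs, us):
    """Newton-Raphson MLE of Rasch ability, clamped to [-4, 4]."""
    theta = 0.0
    for _ in range(50):
        probs = [p(theta, b) for b in bs]
        grad = sum(u - q for u, q in zip(us, probs))
        info = sum(q * (1 - q) for q in probs)
        theta = max(-4.0, min(4.0, theta + grad / info))
    return theta

rng = random.Random(1)
true_theta = 0.8
easy = [rng.uniform(-2, 0) for _ in range(200)]   # easy calibrated subset
hard = [rng.uniform(0, 2) for _ in range(200)]    # hard calibrated subset
for bank in (easy, hard):
    us = [1 if rng.random() < p(true_theta, b) else 0 for b in bank]
    print(round(mle(bank, us), 2))  # both estimates hover near 0.8
```

This is the property that makes self-adapted testing scoreable at all: whichever difficulty levels the examinee chooses from a calibrated pool, the ability estimates remain comparable.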
Rocklin, Thomas – 1989
In self-adapted testing, examinees are allowed to choose the difficulty of each item to be presented immediately before attempting it. Previous research has demonstrated that self-adapted testing leads to better performance than do fixed-order tests and is preferred by examinees. The present study examined the strategies that 29 college students…
Descriptors: Adaptive Testing, Attribution Theory, College Students, Computer Assisted Testing