Vida, Leonardo J.; Bolsinova, Maria; Brinkhuis, Matthieu J. S. – International Educational Data Mining Society, 2021
The quality of exams drives the test-taking behavior of examinees and is a proxy for the quality of teaching. Because most university exams have strict time limits, and speededness is an important measure of the cognitive state of examinees, it might be used to assess the connection between exam quality and examinee performance. The practice of…
Descriptors: Accuracy, Test Items, Tests, Student Behavior
Granena, Gisela – Studies in Second Language Acquisition, 2019
This study investigated the underlying structure of a set of eight cognitive tests from the two most recent language aptitude test batteries, the LLAMA (Meara, 2005) and the Hi-LAB (Linck et al., 2013), to see whether they had any underlying constructs in common. The study also examined whether any of the observed constructs could predict L2…
Descriptors: Second Language Learning, Intelligence Tests, Memory, Language Aptitude
Kim, Kerry J.; Meir, Eli; Pope, Denise S.; Wendel, Daniel – Journal of Educational Data Mining, 2017
Computerized classification of student answers offers the possibility of instant feedback and improved learning. Open response (OR) questions provide greater insight into student thinking and understanding than more constrained multiple choice (MC) questions, but development of automated classifiers is more difficult, often requiring training a…
Descriptors: Classification, Computer Assisted Testing, Multiple Choice Tests, Test Format