Peer reviewed
Mirjam de Vreeze-Westgeest; Sara Mata; Francisca Serrano; Wilma Resing; Bart Vogelaar – European Journal of Psychology and Educational Research, 2023
The current study aimed to investigate the effectiveness of an online dynamic test in reading and writing, differentiating between typically developing children (n = 47) and children diagnosed with dyslexia (n = 30) aged between nine and twelve years. In doing so, it was analysed whether visual working memory, auditory working memory, inhibition,…
Descriptors: Computer Assisted Testing, Reading Tests, Writing Tests, Executive Function
Peer reviewed
Yamamoto, Kentaro; He, Qiwei; Shin, Hyo Jeong; von Davier, Matthias – ETS Research Report Series, 2017
Approximately a third of the Programme for International Student Assessment (PISA) items in the core domains (math, reading, and science) are constructed-response items that require human coding (scoring). This process is time-consuming, expensive, and prone to error, as often (a) humans code inconsistently, and (b) coding reliability in…
Descriptors: Foreign Countries, Achievement Tests, International Assessment, Secondary School Students
Peer reviewed
van den Bergh, Huub – Applied Psychological Measurement, 1990
In this study, 590 third graders from 12 Dutch schools took 32 tests indicating 16 semantic Structure-of-Intellect (SI) abilities, plus 1 of 4 reading comprehension tests involving either multiple-choice or open-ended items. Results indicate that both item types for reading comprehension are congeneric with respect to the SI abilities measured. (TJH)
Descriptors: Comparative Testing, Computer Assisted Testing, Construct Validity, Elementary Education