Publication Date
  In 2025: 1
  Since 2024: 2
  Since 2021 (last 5 years): 19
  Since 2016 (last 10 years): 36
  Since 2006 (last 20 years): 76
Descriptor
  Computer Assisted Testing: 98
  Vocabulary: 98
  Language Tests: 31
  Foreign Countries: 29
  Test Items: 27
  English (Second Language): 23
  College Students: 19
  Reading Comprehension: 19
  Test Construction: 19
  Scores: 17
  Second Language Learning: 17
Author
  Alonzo, Julie: 9
  Anderson, Daniel: 9
  Tindal, Gerald: 9
  Park, Bitnara Jasmine: 7
  Petscher, Yaacov: 6
  Ben Seipel: 4
  Mark L. Davison: 4
  Sarah E. Carlson: 4
  Virginia Clinton-Lisell: 4
  Vispoel, Walter P.: 4
  Foorman, Barbara R.: 3
Ben Seipel; Patrick C. Kennedy; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison – Journal of Learning Disabilities, 2023
As access to higher education increases, it is important to monitor students with special needs to facilitate the provision of appropriate resources and support. Although metrics such as ACT's (formerly American College Testing) "reading readiness" provide insight into how many students may need such resources, they do not specify…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Reading Tests, Reading Comprehension
Ben Seipel; Patrick C. Kennedy; Sarah E. Carlson; Virginia Clinton-Lisell; Mark L. Davison – Grantee Submission, 2022
As access to higher education increases, it is important to monitor students with special needs to facilitate the provision of appropriate resources and support. Although metrics such as ACT's (formerly American College Testing) "reading readiness" provide insight into how many students may need such resources, they do not specify…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Reading Tests, Reading Comprehension
Mimi Ismail; Ahmed Al-Badri; Said Al-Senaidi – Journal of Education and e-Learning Research, 2025
This study aimed to reveal differences in individuals' abilities, their standard errors, and the psychometric properties of the test across the two modes of administration (electronic and paper). The descriptive approach was used to achieve the study's objectives. The study sample consisted of 74 male and female students at the…
Descriptors: Achievement Tests, Computer Assisted Testing, Psychometrics, Item Response Theory
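The Ismail et al. record compares examinees' ability estimates and standard errors across administration modes using item response theory. As one concrete illustration, under the Rasch model (the abstract does not name the specific IRT model used), the maximum-likelihood ability estimate and its standard error can be computed as in this sketch, with invented responses and item difficulties:

```python
import math

def rasch_ability(responses, difficulties, iters=20):
    """Newton-Raphson maximum-likelihood estimate of ability (theta)
    under the Rasch model, given 0/1 responses and known item difficulties."""
    theta = 0.0
    for _ in range(iters):
        probs = [1.0 / (1.0 + math.exp(b - theta)) for b in difficulties]
        info = sum(p * (1.0 - p) for p in probs)       # test information at theta
        theta += (sum(responses) - sum(probs)) / info  # Newton-Raphson step
    return theta, 1.0 / math.sqrt(info)                # SE = 1 / sqrt(information)

# Illustrative data only: the same six responses scored under a paper and an
# electronic administration would be compared on theta and its standard error.
theta, se = rasch_ability([1, 1, 0, 1, 0, 1], [-1.0, -0.5, 0.0, 0.5, 1.0, 1.5])
print(f"theta = {theta:.2f}, SE = {se:.2f}")
```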
Krach, Shelley Kathleen; McCreery, Michael P.; Dennis, Lindsay; Guerard, Jessika; Harris, Erica L. – Psychology in the Schools, 2020
Pearson now uses a technology-based testing platform, Q-Interactive, to administer tests previously available in paper versions. The same norms are used for both versions, and Pearson's in-house equivalency studies indicated that the two versions are equated. The goal of the current study is to independently evaluate these equivalency findings. For the current…
Descriptors: Preschool Children, Computer Assisted Testing, Test Items, Scores
Virginia Clinton-Lisell; Terrill Taylor; Sarah E. Carlson; Mark L. Davison; Ben Seipel – Grantee Submission, 2022
Standardized reading assessments are often used as a criterion for college admission; however, the relationship of reading assessments to academic achievement, and their predictive validity, remain in question. Through a quantitative review of the literature, we conducted a meta-analysis to examine how well performance on college reading…
Descriptors: Reading Achievement, Reading Comprehension, Reading Tests, Academic Achievement
Virginia Clinton-Lisell; Terrill Taylor; Sarah E. Carlson; Mark L. Davison; Ben Seipel – Journal of College Reading and Learning, 2022
Reading comprehension assessments are used for postsecondary course placement and advising, and they are components of college entrance exams. Therefore, a quantitative understanding of the relationship between reading comprehension assessments and postsecondary academic achievement is needed. To address this need, we conducted a meta-analysis to…
Descriptors: Reading Achievement, Reading Comprehension, Reading Tests, Academic Achievement
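Both Clinton-Lisell et al. records describe pooling correlations between reading-assessment performance and academic achievement. A standard way to pool such correlations is inverse-variance weighting of Fisher-z-transformed values; the sketch below uses fixed-effect pooling with invented numbers (the published analyses may well use random-effects estimation):

```python
import math

def pooled_correlation(rs, ns):
    """Fixed-effect pooling of correlations via Fisher's z transform:
    z = atanh(r), weight = n - 3 (inverse variance), back-transform with tanh."""
    zs = [math.atanh(r) for r in rs]
    ws = [n - 3 for n in ns]
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)

# Illustrative values only, not data from the studies above.
print(pooled_correlation(rs=[0.35, 0.42, 0.28], ns=[120, 85, 200]))
```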
Bartsch, Lea M.; Shepherdson, Peter – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2022
Previous research indicates that long-term memory (LTM) may contribute to performance in working memory (WM) tasks. Across 3 experiments, we investigated the extent to which active maintenance in WM can be replaced by relying on information stored in episodic LTM, thereby freeing capacity for additional information in WM. First, participants…
Descriptors: Short Term Memory, Task Analysis, Recall (Psychology), German
Goodwin, Amanda P.; Petscher, Yaacov; Tock, Jamie; McFadden, Sara; Reynolds, Dan; Lantos, Tess; Jones, Sara – Assessment for Effective Intervention, 2022
Assessment of language skills for upper elementary and middle schoolers is important due to the strong link between language and reading comprehension. Yet few practical, reliable, valid, and instructionally informative assessments of language currently exist. This study provides validation evidence for Monster, P.I., which is a gamified,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Language Tests, Vocabulary
Mingying Zheng – ProQuest LLC, 2024
The digital transformation in educational assessment has led to the proliferation of large-scale data, offering unprecedented opportunities to enhance language learning and testing through machine learning (ML) techniques. Drawing on the extensive data generated by online English language assessments, this dissertation investigates the efficacy…
Descriptors: Artificial Intelligence, Computational Linguistics, Language Tests, English (Second Language)
Beaty, Roger E.; Johnson, Dan R.; Zeitlen, Daniel C.; Forthmann, Boris – Creativity Research Journal, 2022
Semantic distance is increasingly used for automated scoring of originality on divergent thinking tasks, such as the Alternate Uses Task (AUT). Despite some psychometric support for semantic distance, including positive correlations with human creativity ratings, additional work is needed to optimize its reliability and validity, including…
Descriptors: Semantics, Scoring, Creative Thinking, Creativity
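The semantic-distance scoring that Beaty et al. discuss is commonly operationalized as one minus the cosine similarity between word-embedding vectors for the prompt object and the response. A minimal sketch, with invented three-dimensional vectors standing in for embeddings from a trained distributional model such as GloVe or word2vec:

```python
import numpy as np

# Made-up embedding vectors for illustration; a real scorer would load
# vectors from a trained distributional model.
embeddings = {
    "brick":       np.array([0.2, 0.7, 0.1]),
    "doorstop":    np.array([0.3, 0.6, 0.2]),
    "paperweight": np.array([0.1, 0.8, 0.3]),
}

def semantic_distance(prompt, response):
    """1 - cosine similarity between prompt and response vectors;
    larger distances are taken as a proxy for greater originality."""
    a, b = embeddings[prompt], embeddings[response]
    cosine = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return 1.0 - cosine

# Score two Alternate Uses Task responses to the prompt "brick".
for word in ("doorstop", "paperweight"):
    print(word, round(semantic_distance("brick", word), 3))
```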
Goodwin, Amanda P.; Petscher, Yaacov; Jones, Sara; McFadden, Sara; Reynolds, Dan; Lantos, Tess – Reading Teacher, 2020
The authors describe Monster, P.I., which is an app-based, gamified assessment that measures language skills (knowledge of morphology, vocabulary, and syntax) of students in grades 5-8 and provides teachers with interpretable score reports to drive instruction that improves vocabulary, reading, and writing ability. Specifically, the authors…
Descriptors: Computer Assisted Testing, Handheld Devices, Language Maintenance, Language Tests
Dalton, Sarah Grace; Stark, Brielle C.; Fromm, Davida; Apple, Kristen; MacWhinney, Brian; Rensch, Amanda; Rowedder, Madyson – Journal of Speech, Language, and Hearing Research, 2022
Purpose: The aim of this study was to advance the use of structured, monologic discourse analysis by validating an automated scoring procedure for core lexicon (CoreLex) using transcripts. Method: Forty-nine transcripts from persons with aphasia and 48 transcripts from persons with no brain injury were retrieved from the AphasiaBank database. Five…
Descriptors: Validity, Discourse Analysis, Databases, Scoring
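Checklist-style core-lexicon scoring of the kind Dalton et al. validate credits a transcript once for each core item it contains. A minimal sketch, with an invented word list (real CoreLex checklists are stimulus-specific, and published procedures also match inflected forms such as "ran" for "run"):

```python
# Invented core-lexicon checklist for illustration only.
CORE_LEXICON = {"boy", "girl", "kite", "tree", "run", "fall"}

def corelex_score(transcript: str) -> int:
    """Count how many core-lexicon items appear in the transcript."""
    tokens = {token.strip(".,!?").lower() for token in transcript.split()}
    return len(CORE_LEXICON & tokens)

print(corelex_score("The boy ran and the kite got stuck in the tree."))  # 3
```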
Blomquist, Christina; McMurray, Bob – Developmental Psychology, 2023
As a spoken word unfolds over time, similar-sounding words ("cap" and "cat") compete until one word "wins". Lexical competition becomes more efficient from infancy through adolescence. We examined one potential mechanism underlying this development: lexical inhibition, by which activated candidates suppress…
Descriptors: Speech Communication, Language Acquisition, Age Differences, Word Recognition
Goodwin, Amanda P.; Petscher, Yaacov; Jones, Sara; McFadden, Sara; Reynolds, Dan; Lantos, Tess – Grantee Submission, 2020
The authors describe Monster, P.I., which is an app-based, gamified assessment that measures language skills (knowledge of morphology, vocabulary, and syntax) of students in grades 5-8 and provides teachers with interpretable score reports to drive instruction that improves vocabulary, reading, and writing ability. Specifically, the authors describe…
Descriptors: Computer Assisted Testing, Handheld Devices, Language Maintenance, Language Tests
Chen, Yi-Jui I.; Chen, Yi-Hsin; Anthony, Jason L.; Erazo, Noé A. – Journal of Psychoeducational Assessment, 2022
The Computer-based Orthographic Processing Assessment (COPA) is a newly developed assessment that measures orthographic processing skills, including rapid perception, access, differentiation, correction, and arrangement. In this study, cognitive diagnostic models were used to test whether the dimensionality of the COPA conforms to theoretical expectation,…
Descriptors: Elementary School Students, Grade 2, Computer Assisted Testing, Orthographic Symbols
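Cognitive diagnostic models of the kind Chen et al. apply classify examinees by mastery of discrete attributes. The DINA model, one common member of that family (the abstract does not say which model the authors fit), has the item response function sketched below:

```python
def dina_prob(alpha, q_row, slip, guess):
    """DINA item response function: a respondent who has mastered every
    attribute the Q-matrix row requires for the item (eta = 1) succeeds
    with probability 1 - slip; anyone else succeeds with probability guess."""
    eta = all(a >= q for a, q in zip(alpha, q_row))
    return 1.0 - slip if eta else guess

# Illustrative: an item requiring attributes 1 and 3 (of three), answered
# by a respondent who has mastered attributes 1 and 3, so P = 1 - slip.
print(dina_prob(alpha=[1, 0, 1], q_row=[1, 0, 1], slip=0.1, guess=0.2))
```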