Showing all 15 results
Peer reviewed
Direct link
Stenger, Rachel; Olson, Kristen; Smyth, Jolene D. – Field Methods, 2023
Questionnaire designers use readability measures to ensure that questions can be understood by the target population. The most common measure is the Flesch-Kincaid Grade Level, but other formulas exist. This article compares six different readability measures across 150 questions in a self-administered questionnaire, finding notable variation in…
Descriptors: Readability, Readability Formulas, Computer Assisted Testing, Evaluation Methods
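For reference, the Flesch-Kincaid Grade Level named in this entry is computed from average sentence length and average syllables per word. The Python sketch below is a minimal illustration under assumed tokenization rules; the vowel-group syllable counter is a naive stand-in for the pronunciation dictionaries real tools use, and the sample sentence is purely for demonstration.

```python
import re

def count_syllables(word):
    # Naive heuristic: count runs of consecutive vowels.
    # Production readability tools use pronunciation dictionaries.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch-Kincaid Grade Level formula.
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

print(round(flesch_kincaid_grade("The cat sat on the mat. It was happy."), 2))
```

Variation across formulas, the subject of the study, arises because each weights sentence length and word difficulty differently.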
Peer reviewed
Direct link
Chan, Cecilia Ka Yuk – Assessment & Evaluation in Higher Education, 2023
With advances in technology, digital and information literacy has become crucial to employers' selection of candidates in this digital AI era. For most students, receiving and producing electronic text has become the norm, and thus examinations with writing components done by hand may not accurately reflect their abilities. It…
Descriptors: Test Format, Handwriting, Stakeholders, Feedback (Response)
Peer reviewed
Direct link
Sanders, Benjamin W.; Bedrick, Steven; Broder-Fingert, Sarabeth; Brown, Shannon A.; Dolata, Jill K.; Fombonne, Eric; Reeder, Julie A.; Rivas Vazquez, Luis Andres; Fuchu, Plyce; Morales, Yesenia; Zuckerman, Katharine E. – Autism: The International Journal of Research and Practice, 2023
Limited access to screening and evaluation for autism spectrum disorder in children is a major barrier to improving outcomes for marginalized families. To identify and evaluate available digital autism spectrum disorder screening resources, we simulated web and mobile app searches by a parent concerned about their child's likelihood of autism…
Descriptors: Screening Tests, Autism Spectrum Disorders, Computer Assisted Testing, Parent Attitudes
Peer reviewed
PDF on ERIC
Solnyshkina, Marina I.; Zamaletdinov, Radif R.; Gorodetskaya, Ludmila A.; Gabitov, Azat I. – Journal of Social Studies Education Research, 2017
The article presents the results of an exploratory study of the use of T.E.R.A., an automated tool measuring text complexity and readability based on the assessment of five text complexity parameters: narrativity, syntactic simplicity, word concreteness, referential cohesion and deep cohesion. Aimed at finding ways to utilize T.E.R.A. for…
Descriptors: Readability Formulas, Readability, Foreign Countries, Computer Assisted Testing
Peer reviewed
PDF on ERIC
Taylor, Zachary W. – International Journal of Higher Education, 2017
A recent Educational Testing Service report (2016) found that international graduate students with a TOEFL score of 80 (the minimum average TOEFL score for graduate admission in the United States) usually possess reading subscores of 20, equating to a 12th-grade reading comprehension level. However, one public flagship university's international…
Descriptors: Foreign Students, Graduate Students, Reading Comprehension, College Admission
Peer reviewed
PDF on ERIC
Nese, Joseph F. T.; Kahn, Josh; Kamata, Akihito – Grantee Submission, 2017
Despite prevalent use and practical application, the current and standard assessment of oral reading fluency (ORF) presents considerable limitations that reduce its validity in estimating growth and monitoring student progress, including: (a) high cost of implementation; (b) tenuous passage equivalence; and (c) bias, large standard error, and…
Descriptors: Automation, Speech, Recognition (Psychology), Scores
Peer reviewed
Direct link
Thompson, Meredith Myra; Braude, Eric John – Journal of Educational Computing Research, 2016
The assessment of learning in large online courses requires tools that are valid, reliable, easy to administer, and can be automatically scored. We have evaluated an online assessment and learning tool called Knowledge Assembly, or Knowla. Knowla measures a student's knowledge in a particular subject by having the student assemble a set of…
Descriptors: Computer Assisted Testing, Teaching Methods, Online Courses, Critical Thinking
Peer reviewed
PDF on ERIC
Ebadi, Saman; Saeedian, Abdulbaset – Iranian Journal of Language Teaching Research, 2016
Derived from Vygotsky's works, dynamic assessment (DA) enables learners to move beyond their current level of functioning through offering needs-sensitized mediation. This study aimed at exploring the learners' development in novel and increasingly more challenging situations called transcendence (TR) in an L2 context focusing on reading…
Descriptors: English (Second Language), Second Language Learning, Computer Assisted Testing, Language Tests
Peer reviewed
Anderson, Jonathan – Journal of Research in Reading, 1983
Reports a number of modifications to the computer readability program STAR (Simple Tests Approach to Readability) designed to make it more useful. (FL)
Descriptors: Computer Assisted Testing, Content Analysis, Readability, Readability Formulas
Peer reviewed
Duffelmeyer, Frederick A. – Reading Teacher, 1985
Argues that, while computers make the computation of readability formulas easy, teachers should not forget the need to apply judgment and common sense to the results. Discusses RIXRATE, a computerized version of the Rix index, and compares its performance to that of the Rauding Scale. (FL)
Descriptors: Computer Assisted Testing, Elementary Secondary Education, Microcomputers, Readability
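For context, the Rix index behind RIXRATE is usually given as the number of long words (seven or more letters) per sentence, with the resulting ratio converted to a grade level through a published lookup table. A minimal sketch, assuming a simple regex tokenizer (the grade-conversion table is omitted):

```python
import re

def rix(text):
    # Rix = long words (7+ letters) per sentence (Anderson, 1983).
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    long_words = [w for w in re.findall(r"[A-Za-z]+", text) if len(w) >= 7]
    return len(long_words) / len(sentences)
```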
Peer reviewed
Rush, R. Timothy – Reading Teacher, 1985
Discusses the characteristics of three popular readability formulas: the Dale-Chall, the Fry Graph, and the Spache. Describes text-based and reader/text-based alternatives. Offers appropriate applications of each form of assessment. (FL)
Descriptors: Computer Assisted Testing, Elementary Education, Evaluation Methods, Readability
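Of the three formulas this entry names, Dale-Chall is the most straightforward to state: it combines the percentage of words not on the Dale list of roughly 3,000 familiar words with average sentence length. A hedged sketch of the commonly published raw-score computation (the familiar-word list itself must be supplied separately):

```python
def dale_chall_raw_score(pct_difficult, avg_sentence_length):
    # Raw score = 0.1579 * % difficult words + 0.0496 * avg sentence length,
    # plus a 3.6365 adjustment when more than 5% of words are unfamiliar.
    score = 0.1579 * pct_difficult + 0.0496 * avg_sentence_length
    if pct_difficult > 5:
        score += 3.6365
    return score
```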
Spiegel, Glenn; Campbell, John J. – 1985
The Flesch readability index yields meaningful information about the responses of readers to texts. Because the formula is so simple, a group of English teachers wrote a program in BASIC that would count some obvious surface features of a text and calculate Flesch scores. Among the programming problems encountered were counting words (taking into…
Descriptors: Computer Assisted Testing, Computer Software, Higher Education, Measurement Techniques
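The Flesch Reading Ease score those teachers were programming is well documented: 206.835 minus 1.015 times words-per-sentence, minus 84.6 times syllables-per-word. The sketch below makes one of the surface-counting decisions the abstract alludes to explicit; treating hyphenated forms as single words, like the regex tokenizer itself, is an assumption for illustration.

```python
import re

def flesch_reading_ease(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    # Count hyphenated forms as one word; this is exactly the kind of
    # counting decision the entry above says the teachers had to settle.
    words = re.findall(r"[A-Za-z']+(?:-[A-Za-z']+)*", text)
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    return (206.835 - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Higher scores indicate easier text; standard Reading Ease values run roughly from 0 to 100.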
Hamel, Cheryl J.; And Others – 1982
To ensure that the essential job-related reading materials for nonrated United States Navy personnel were not beyond their reading capabilities, a study was undertaken to determine the readability levels of a representative sample of essential Navy job-related materials. The criteria for selecting material were that it be narrative text and that…
Descriptors: Computer Assisted Testing, Military Personnel, Military Training, Readability
Peer reviewed
Britton, Gwyneth; Lumpkin, Margaret – Reading Psychology, 1982
Subjecting the comprehension passages of the Gates-MacGinitie Reading Test to readability analysis using a multiformula computer program revealed that the instrument can best be used for making comparisons between groups using the same test forms and levels in the same sequence. (FL)
Descriptors: Computer Assisted Testing, Elementary Secondary Education, Readability, Reading Comprehension
Vrasidas, Charalambos; Lantz, Chris – 1995
This paper describes a study in which a Picture Readability Index (PRI) was used to investigate initial and extended perceptions of photographs. Readability criteria for evaluating instructional text seem to have been in place for a long time, yet instructional visuals like photographs and illustrations have typically been subject to no such…
Descriptors: Adaptive Testing, Cognitive Processes, Computer Assisted Testing, Evaluation Criteria