Showing all 4 results
Peer reviewed
Sandra Nilsson; Elisabet Östlund; Yvonne Thalén; Ulrika Löfkvist – Journal of Speech, Language, and Hearing Research, 2025
Purpose: The Language ENvironment Analysis (LENA) is a technological tool designed for comprehensive recordings and automated analysis of young children's daily language and auditory environments. LENA recordings play a crucial role in both clinical interventions and research, offering insights into the amount of spoken language children are…
Descriptors: Foreign Countries, Family Environment, Toddlers, Oral Language
Peer reviewed
Charles Hulme; Joshua McGrane; Mihaela Duta; Gillian West; Denise Cripps; Abhishek Dasgupta; Sarah Hearne; Rachel Gardner; Margaret Snowling – Language, Speech, and Hearing Services in Schools, 2024
Purpose: Oral language skills provide a critical foundation for formal education and especially for the development of children's literacy (reading and spelling) skills. It is therefore important for teachers to be able to assess children's language skills, especially if they are concerned about their learning. We report the development and…
Descriptors: Automation, Language Tests, Standardized Tests, Test Construction
Peer reviewed
José Á. Martínez-Huertas; Olga Jastrzebska; Ricardo Olmos; José A. León – Assessment & Evaluation in Higher Education, 2019
Automated summary evaluation is proposed as an alternative to rubrics and multiple-choice tests in knowledge assessment. Inbuilt rubric is a recent Latent Semantic Analysis (LSA) method that implements rubrics in an artificially-generated semantic space. It was compared with classical LSA's cosine-based methods assessing knowledge in a…
Descriptors: Automation, Scoring Rubrics, Alternative Assessment, Test Reliability
Peer reviewed
Yoav Cohen; Effi Levi; Anat Ben-Simon – Applied Measurement in Education, 2018
In the current study, two pools of 250 essays, all written in response to the same prompt, were rated by two groups of raters (14 or 15 raters per group), thereby providing an approximation of each essay's true score. An automated essay scoring (AES) system was trained on the datasets and then scored the essays using a cross-validation scheme. By…
Descriptors: Test Validity, Automation, Scoring, Computer Assisted Testing