Showing all 7 results
Peer reviewed
Selcuk Acar; Denis Dumas; Peter Organisciak; Kelly Berthiaume – Grantee Submission, 2024
Creativity is highly valued in both education and the workforce, but assessing and developing creativity can be difficult without psychometrically robust and affordable tools. The open-ended nature of creativity assessments has made them difficult to score, expensive, often imprecise, and therefore impractical for school- or district-wide use. To…
Descriptors: Thinking Skills, Elementary School Students, Artificial Intelligence, Measurement Techniques
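One widely reported family of automated approaches to scoring open-ended divergent-thinking responses (offered here only as an illustration; the study's own method may differ) measures the semantic distance between a prompt and a response in a word- or sentence-embedding space, where v_p and v_r are the prompt and response vectors:

\[ d(p, r) = 1 - \frac{\mathbf{v}_p \cdot \mathbf{v}_r}{\lVert \mathbf{v}_p \rVert \, \lVert \mathbf{v}_r \rVert} \]

Larger distances are commonly interpreted as more original responses.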
Peer reviewed
Full text available on ERIC (PDF)
Yaratan, Huseyin; Suphi, Nilgun – Turkish Online Journal of Educational Technology - TOJET, 2013
Questionnaires administered manually can exert surreptitious peer pressure on a candidate to finish when "the others" have completed theirs, forcing students to rush or skip individual items, and can hinder the administrator's ability to notice participants who are having difficulty understanding certain items. These drawbacks can have serious…
Descriptors: Synchronous Communication, Questionnaires, Computer Assisted Testing, Undergraduate Students
Peer reviewed
Full text available on ERIC (PDF)
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of the argument and issue tasks that form the Analytical Writing measure of the "GRE"® General Test. For each of these tasks, this study explored the value added of reporting 4 trait scores for each of these 2 tasks over the total e-rater score.…
Descriptors: Scores, Computer Assisted Testing, Computer Software, Grammar
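For context, the total e-rater score is described in ETS research as a weighted combination of automatically extracted writing features; as a simplified sketch (the actual features and weights are ETS's, and not reproduced here), with standardized feature scores f_i and weights w_i:

\[ \text{total score} = \sum_{i=1}^{n} w_i f_i \]

Trait scores, as studied here, report meaningful subsets of these features separately rather than only the single composite.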
Nering, Michael L., Ed.; Ostini, Remo, Ed. – Routledge, Taylor & Francis Group, 2010
This comprehensive "Handbook" focuses on the most widely used polytomous item response theory (IRT) models. These models describe the interaction between examinees and test questions that offer multiple response categories. The book reviews all of the major models and includes discussions about how and where the models…
Descriptors: Guides, Item Response Theory, Test Items, Correlation
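As a concrete example of the kind of polytomous model the "Handbook" covers, Samejima's graded response model gives the probability that an examinee with ability θ scores in category k or higher on item i, with category probabilities obtained by differencing adjacent boundary curves:

\[ P^{*}_{ik}(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_{ik})}}, \qquad P_{ik}(\theta) = P^{*}_{ik}(\theta) - P^{*}_{i,k+1}(\theta) \]

Here a_i is the item discrimination and b_ik the category boundary, with P*_{i0}(θ) = 1 and P*_{i,m+1}(θ) = 0 for an item scored in m + 1 categories.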
Peer reviewed
Full text available on ERIC (PDF)
Zechner, Klaus; Bejar, Isaac I.; Hemat, Ramin – ETS Research Report Series, 2007
The increasing availability and performance of computer-based testing have prompted more research on the automatic assessment of language and speaking proficiency. In this investigation, we evaluated the feasibility of using an off-the-shelf speech-recognition system for scoring responses to speaking prompts from the 2002 LanguEdge field test. We first…
Descriptors: Role, Computer Assisted Testing, Language Proficiency, Oral Language
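A standard metric for judging the feasibility of an off-the-shelf recognizer (given here as general background; the report's own evaluation criteria may differ) is word error rate, computed from the minimum-edit-distance alignment of the recognizer's output against a human reference transcript:

\[ \mathrm{WER} = \frac{S + D + I}{N} \]

where S, D, and I count substituted, deleted, and inserted words, and N is the number of words in the reference.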
Dabney, Marian E.; Stewart, Theadora – 1990
This study investigated the construct validity of the revised Special Education-Mental Handicaps Georgia Teacher Certification Test (MH-TCT) using hierarchical confirmatory factor analysis and LISREL VI. The primary objective was to determine whether first-order and second-order factors correspond to item/objective/test relationships defined by…
Descriptors: Computer Assisted Testing, Computer Software, Construct Validity, Content Validity
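For readers unfamiliar with the method, a second-order confirmatory factor model of the kind LISREL estimates can be sketched in conventional LISREL notation (a generic form, not the study's actual specification):

\[ \mathbf{y} = \boldsymbol{\Lambda}_y \boldsymbol{\eta} + \boldsymbol{\varepsilon}, \qquad \boldsymbol{\eta} = \boldsymbol{\Gamma} \boldsymbol{\xi} + \boldsymbol{\zeta} \]

where y holds observed item scores, the first-order factors η would correspond to test objectives, and the second-order factors ξ to the broader test-level constructs whose correspondence the study examines.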
Stamper, John, Ed.; Pardos, Zachary, Ed.; Mavrikis, Manolis, Ed.; McLaren, Bruce M., Ed. – International Educational Data Mining Society, 2014
The 7th International Conference on Educational Data Mining, held July 4-7, 2014, at the Institute of Education, London, UK, is the leading international forum for high-quality research that mines large data sets in order to answer educational research questions that shed light on the learning process. These data sets may come from the traces…
Descriptors: Information Retrieval, Data Processing, Data Analysis, Data Collection