Showing all 10 results
Peer reviewed
Direct link
Selcuk Acar; Denis Dumas; Peter Organisciak; Kelly Berthiaume – Grantee Submission, 2024
Creativity is highly valued in both education and the workforce, but assessing and developing creativity can be difficult without psychometrically robust and affordable tools. The open-ended nature of creativity assessments has made them difficult to score, expensive, often imprecise, and therefore impractical for school- or district-wide use. To…
Descriptors: Thinking Skills, Elementary School Students, Artificial Intelligence, Measurement Techniques
Peer reviewed
PDF on ERIC Download full text
Goecke, Benjamin; Schmitz, Florian; Wilhelm, Oliver – Journal of Intelligence, 2021
Performance in elementary cognitive tasks is moderately correlated with fluid intelligence and working memory capacity. These correlations are higher for more complex tasks, presumably due to increased demands on working memory capacity. In accordance with the binding hypothesis, which states that working memory capacity reflects the limit of a…
Descriptors: Intelligence, Cognitive Processes, Short Term Memory, Reaction Time
Peer reviewed
PDF on ERIC Download full text
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of "TOEFL iBT"® independent and integrated tasks. In this study we explored the psychometric added value of reporting four trait scores for each of these two tasks, beyond the total e-rater score.The four trait scores are word choice, grammatical…
Descriptors: Writing Tests, Scores, Language Tests, English (Second Language)
Peer reviewed
PDF on ERIC Download full text
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of the argument and issue tasks that form the Analytical Writing measure of the "GRE"® General Test. For each of these tasks, this study explored the value added of reporting 4 trait scores for each of these 2 tasks over the total e-rater score.…
Descriptors: Scores, Computer Assisted Testing, Computer Software, Grammar
Quinlan, Thomas; Higgins, Derrick; Wolff, Susanne – Educational Testing Service, 2009
This report evaluates the construct coverage of the e-rater[R] scoring engine. The matter of construct coverage depends on whether one defines writing skill in terms of process or product. Originally, the e-rater engine consisted of a large set of components with a proven ability to predict human holistic scores. By organizing these capabilities…
Descriptors: Guides, Writing Skills, Factor Analysis, Writing Tests
Hsu, Huei-Lien – ProQuest LLC, 2012
By centralizing the issue of test fairness in language proficiency assessments, this study responds to a call by researchers for developing greater social responsibility in the language testing agenda. As inquiries into language attitude and psychology indicate, there is an underlying uncertainty pertaining to the validity of test use and score…
Descriptors: Language Variation, English (Second Language), Second Language Learning, Mixed Methods Research
Peer reviewed
PDF on ERIC Download full text
Attali, Yigal – ETS Research Report Series, 2007
This study examined the construct validity of the "e-rater"® automated essay scoring engine as an alternative to human scoring in the context of TOEFL® essay writing. Analyses were based on a sample of students who repeated the TOEFL within a short time period. Two "e-rater" scores were investigated in this study, the first…
Descriptors: Construct Validity, Computer Assisted Testing, Scoring, English (Second Language)
Peer reviewed
PDF on ERIC Download full text
Zechner, Klaus; Bejar, Isaac I.; Hemat, Ramin – ETS Research Report Series, 2007
The increasing availability and performance of computer-based testing have prompted more research on the automatic assessment of language and speaking proficiency. In this investigation, we evaluated the feasibility of using an off-the-shelf speech-recognition system for scoring speaking prompts from the LanguEdge field test of 2002. We first…
Descriptors: Role, Computer Assisted Testing, Language Proficiency, Oral Language
Peer reviewed
Endler, Norman S.; Parker, James D. A. – Educational and Psychological Measurement, 1990
C. Davis and M. Cowles (1989) analyzed a total trait anxiety score on the Endler Multidimensional Anxiety Scales (EMAS)--a unidimensional construct that this multidimensional measure does not assess. Data are reanalyzed using the appropriate scoring procedure for the EMAS. Subjects included 145 undergraduates in 1 of 4 testing conditions. (SLD)
Descriptors: Anxiety, Comparative Testing, Computer Assisted Testing, Construct Validity
Cohen, Allan S., Comp. – 1979
This partially annotated bibliography of journal articles, dissertations, convention papers, research reports, and a few books and unpublished manuscripts provides comprehensive coverage of work on latent trait theory and practice. Documents are arranged alphabetically by author. The period covered ranges from the early 1950s to the present.…
Descriptors: Attitude Measures, Career Development, Computer Assisted Testing, Computer Programs