Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 3 |
Since 2016 (last 10 years) | 4 |
Since 2006 (last 20 years) | 5 |
Descriptor
Automation | 5 |
Creativity Tests | 5 |
Creative Thinking | 3 |
Creativity | 2 |
Scoring | 2 |
Accuracy | 1 |
Algorithms | 1 |
Artificial Intelligence | 1 |
Computer Assisted Testing | 1 |
Construct Validity | 1 |
Cooperation | 1 |
Author
Badia, Toni | 1 |
Buczak, Philip | 1 |
Doebler, Philipp | 1 |
Dumas, Denis | 2 |
Flemister, Charles | 1 |
Forthmann, Boris | 1 |
Huang, He | 1 |
Jaschek, Corinna | 1 |
Meinel, Christoph | 1 |
von Thienen, Julia | 1 |
Publication Type
Journal Articles | 5 |
Reports - Research | 5 |
Tests/Questionnaires | 1 |
Education Level
Higher Education | 1 |
Postsecondary Education | 1 |
Assessments and Surveys
Remote Associates Test | 1 |
Torrance Tests of Creative Thinking | 1 |
Corinna Jaschek; Julia von Thienen; Kim-Pascal Borchart; Christoph Meinel – Creativity Research Journal, 2023
The automation of creativity measurement is a promising avenue of development, given that classic creativity assessments face challenges such as resource-intensive expert judgments, subjective creativity ratings, and biases in people's self-reports. In this paper, we present a construct validation study for CollaboUse, a test developed to deliver…
Descriptors: Automation, Creativity Tests, Cooperation, Construct Validity
Buczak, Philip; Huang, He; Forthmann, Boris; Doebler, Philipp – Journal of Creative Behavior, 2023
Traditionally, researchers employ human raters for scoring responses to creative thinking tasks. Apart from the associated costs, this approach entails two potential risks. First, human raters can be subjective in their scoring behavior (inter-rater variance). Second, individual raters are prone to inconsistent scoring patterns…
Descriptors: Computer Assisted Testing, Scoring, Automation, Creative Thinking
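The inter-rater variance this entry refers to can be made concrete with a small sketch: given a matrix of originality ratings (raters by responses), pairwise correlations show disagreement in how raters order the responses, and per-rater means show differences in leniency. The ratings and the NumPy approach below are illustrative assumptions, not material from the study.

```python
# A minimal sketch (not from the paper) of quantifying inter-rater variance
# in creativity ratings. The ratings below are invented for illustration.
import numpy as np

# rows = raters, columns = responses; hypothetical 1-5 originality ratings
ratings = np.array([
    [3, 4, 2, 5, 1, 4],
    [2, 4, 3, 5, 2, 3],
    [4, 5, 2, 4, 1, 5],
])

# Pairwise Pearson correlations between raters: how consistently they
# rank-order the same responses (off-diagonal entries)
corr = np.corrcoef(ratings)
print("pairwise rater correlations:\n", np.round(corr, 2))

# Per-rater mean rating: systematic leniency/severity differences
print("per-rater means:", ratings.mean(axis=1))
```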
Selcuk Acar; Kelly Berthiaume; Katalin Grajzel; Denis Dumas; Charles Flemister; Peter Organisciak – Gifted Child Quarterly, 2023
In this study, we applied different text-mining methods to the originality scoring of the Unusual Uses Test (UUT) and Just Suppose Test (JST) from the Torrance Tests of Creative Thinking (TTCT)-Verbal. Responses from 102 and 123 participants who completed Form A and Form B, respectively, were scored using three different text-mining methods. The…
Descriptors: Creative Thinking, Creativity Tests, Scoring, Automation
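The specific text-mining methods compared in this study are not detailed in the snippet above. As a hedged illustration of the general idea, one common approach scores originality as the semantic distance between a response and its prompt. The TF-IDF/cosine setup and the brick prompt below are assumptions for demonstration only, not the study's actual scoring pipeline.

```python
# A hedged sketch of semantic-distance originality scoring: responses that
# are lexically/semantically farther from the prompt score as more original.
# Prompt and responses are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

prompt = "unusual uses for a brick"
responses = [
    "build a brick wall",
    "use it as a paperweight",
    "grind it into red pigment",
]

# Fit TF-IDF on prompt + responses so all texts share one vocabulary
vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform([prompt] + responses)

# Originality proxy: 1 - cosine similarity to the prompt (larger = more distant)
similarities = cosine_similarity(vectors[0], vectors[1:]).ravel()
for response, sim in zip(responses, similarities):
    print(f"{response!r}: originality = {1 - sim:.2f}")
```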
Maio, Shannon; Dumas, Denis; Organisciak, Peter; Runco, Mark – Creativity Research Journal, 2020
In recognition of the capability of text-mining models to quantify aspects of language use, some creativity researchers have adopted text-mining models as a mechanism to objectively and efficiently score the Originality of open-ended responses to verbal divergent thinking tasks. With the increasing use of text-mining models in divergent thinking…
Descriptors: Creative Thinking, Scores, Reliability, Data Analysis
Klein, Ariel; Badia, Toni – Journal of Creative Behavior, 2015
In this study, we show how complex creative relations can arise from fairly frequent semantic relations observed in everyday language. In doing so, we reflect on some key cognitive aspects of linguistic and general creativity. In our experiments, we automated the process of solving a battery of Remote Associates Test tasks. By applying…
Descriptors: Language Usage, Semantics, Natural Language Processing, Test Items
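As a hedged sketch of how a Remote Associates Test item can be solved automatically (not the authors' actual system), one simple strategy is to pick the candidate word with the highest average semantic similarity to the three cue words. The toy vectors below are invented stand-ins for a corpus-derived distributional model.

```python
# A minimal, assumed sketch of automated RAT solving: choose the candidate
# whose mean cosine similarity to all three cues is highest.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical word vectors; real systems derive these from large corpora
vectors = {
    "cottage": np.array([0.9, 0.1, 0.2]),
    "swiss":   np.array([0.8, 0.3, 0.1]),
    "cake":    np.array([0.7, 0.2, 0.6]),
    "cheese":  np.array([0.85, 0.2, 0.3]),
    "chair":   np.array([0.1, 0.9, 0.2]),
    "water":   np.array([0.2, 0.3, 0.9]),
}

cues = ["cottage", "swiss", "cake"]       # classic RAT item (answer: cheese)
candidates = ["cheese", "chair", "water"]

# Score each candidate by its mean similarity to the three cue words
scores = {
    c: np.mean([cosine(vectors[c], vectors[q]) for q in cues])
    for c in candidates
}
print(max(scores, key=scores.get), scores)
```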