Corinna Jaschek; Julia von Thienen; Kim-Pascal Borchart; Christoph Meinel – Creativity Research Journal, 2023
The automation of creativity measurement is a promising avenue of development, given that classic creativity assessments face challenges such as resource-intensive expert judgments, subjective creativity ratings, and biases in people's self-reports. In this paper, we present a construct validation study for CollaboUse, a test developed to deliver…
Descriptors: Automation, Creativity Tests, Cooperation, Construct Validity
Philip Buczak; He Huang; Boris Forthmann; Philipp Doebler – Journal of Creative Behavior, 2023
Traditionally, researchers employ human raters to score responses to creative thinking tasks. Apart from the associated costs, this approach entails two potential risks. First, human raters can be subjective in their scoring behavior (inter-rater variance). Second, individual raters are prone to inconsistent scoring patterns…
Descriptors: Computer Assisted Testing, Scoring, Automation, Creative Thinking
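The two rater risks named in this abstract are easy to make concrete. Below is a minimal, illustrative sketch (not from the paper; all numbers are fabricated toy data) that quantifies between-rater severity differences and each rater's consistency with the consensus:

```python
# Illustrative sketch of the two rater risks: severity differences between
# raters (subjectivity) and inconsistency of individual raters. Toy data only.
import numpy as np

# rows = 3 human raters, columns = 6 creative-task responses, 1-5 scale
ratings = np.array([
    [3, 4, 2, 5, 3, 4],
    [2, 3, 1, 4, 2, 3],   # systematically harsher rater
    [3, 5, 2, 5, 2, 5],
])

# Risk 1: subjectivity -- do raters differ in overall severity?
rater_means = ratings.mean(axis=1)
print("rater means:", rater_means)
print("between-rater variance:", round(rater_means.var(ddof=1), 3))

# Risk 2: inconsistency -- how well does each rater track the consensus?
consensus = ratings.mean(axis=0)
for i, row in enumerate(ratings):
    r = np.corrcoef(row, consensus)[0, 1]
    print(f"rater {i} vs. consensus: r = {r:.2f}")
```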
Selcuk Acar; Kelly Berthiaume; Katalin Grajzel; Denis Dumas; Charles Flemister; Peter Organisciak – Gifted Child Quarterly, 2023
In this study, we applied different text-mining methods to the originality scoring of the Unusual Uses Test (UUT) and Just Suppose Test (JST) from the Torrance Tests of Creative Thinking (TTCT)-Verbal. Responses from 102 and 123 participants who completed Form A and Form B, respectively, were scored using three different text-mining methods. The…
Descriptors: Creative Thinking, Creativity Tests, Scoring, Automation
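For readers unfamiliar with text-mining originality scoring, here is a hedged sketch of the general idea: score a response by its semantic distance from the prompt. The tiny hand-made vectors are placeholders for real word embeddings such as GloVe; nothing here is the authors' code:

```python
# Sketch of originality as semantic distance between a prompt and a response.
# The toy vectors below stand in for real pretrained word embeddings.
import numpy as np

toy_embeddings = {  # placeholder vectors, illustrative only
    "brick":       np.array([0.90, 0.10, 0.00]),
    "wall":        np.array([0.85, 0.15, 0.05]),
    "paperweight": np.array([0.20, 0.70, 0.30]),
    "art":         np.array([0.10, 0.30, 0.90]),
}

def semantic_distance(word_a: str, word_b: str) -> float:
    """1 - cosine similarity; larger = more original relative to the prompt."""
    a, b = toy_embeddings[word_a], toy_embeddings[word_b]
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - cos

prompt = "brick"  # Unusual Uses Test style prompt
for response in ["wall", "paperweight", "art"]:
    print(f"{prompt} -> {response}: {semantic_distance(prompt, response):.3f}")
```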
Shannon Maio; Denis Dumas; Peter Organisciak; Mark Runco – Creativity Research Journal, 2020
In recognition of the capability of text-mining models to quantify aspects of language use, some creativity researchers have adopted them as a mechanism to objectively and efficiently score the Originality of open-ended responses to verbal divergent thinking tasks. With the increasing use of text-mining models in divergent thinking…
Descriptors: Creative Thinking, Scores, Reliability, Data Analysis
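One common way reliability analyses of this kind are run is Cronbach's alpha over item-level scores. The sketch below is our illustration with fabricated toy numbers, not the article's analysis:

```python
# Assumed illustration: Cronbach's alpha across the items of a divergent
# thinking task, computed on machine-generated Originality scores. Toy data.
import numpy as np

# rows = 5 participants, columns = 4 task items; model-derived scores, 0-1
scores = np.array([
    [0.42, 0.51, 0.38, 0.47],
    [0.66, 0.71, 0.60, 0.69],
    [0.30, 0.28, 0.35, 0.33],
    [0.55, 0.49, 0.58, 0.52],
    [0.75, 0.80, 0.72, 0.78],
])

k = scores.shape[1]
item_vars = scores.var(axis=0, ddof=1)       # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.3f}")
```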
Peter Organisciak; Selcuk Acar; Denis Dumas; Kelly Berthiaume – Grantee Submission, 2023
Automated scoring for divergent thinking (DT) seeks to overcome a key obstacle to creativity measurement: the effort, cost, and reliability of scoring open-ended tests. For a common test of DT, the Alternate Uses Task (AUT), the primary automated approach casts the problem as a semantic distance between a prompt and the resulting idea in a text…
Descriptors: Automation, Computer Assisted Testing, Scoring, Creative Thinking
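The prompt-to-idea semantic distance this abstract describes can be sketched with the open-source sentence-transformers library; the model choice and the AUT items below are our assumptions for illustration, not the paper's setup:

```python
# Minimal sketch of semantic-distance scoring for the Alternate Uses Task:
# distance between the prompt and each idea. Model and items are assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # downloads on first run

prompt = "List unusual uses for a paperclip."    # AUT-style prompt (toy)
ideas = [
    "hold papers together",       # common use -> low distance expected
    "reset a phone's SIM tray",
    "sculpt a tiny wire figure",  # remote idea -> high distance expected
]

prompt_vec = model.encode(prompt, convert_to_tensor=True)
idea_vecs = model.encode(ideas, convert_to_tensor=True)

for idea, sim in zip(ideas, util.cos_sim(prompt_vec, idea_vecs)[0]):
    print(f"{idea!r}: semantic distance = {1 - float(sim):.3f}")
```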
Ariel Klein; Toni Badia – Journal of Creative Behavior, 2015
In this study we show how complex creative relations can arise from fairly frequent semantic relations observed in everyday language. By doing this, we reflect on some key cognitive aspects of linguistic and general creativity. In our experiments, we automated the process of solving a battery of Remote Associates Test tasks. By applying…
Descriptors: Language Usage, Semantics, Natural Language Processing, Test Items
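The idea of deriving remote associates from common semantic relations can be illustrated with a toy solver that ranks candidates by their combined association with all three cue words; the association table is a stand-in for corpus-derived statistics, not the authors' system:

```python
# Toy Remote Associates Test solver: score each candidate word by its total
# association with the three cues. Association strengths are illustrative
# placeholders for corpus co-occurrence statistics.
ASSOC = {
    ("cheese", "cottage"): 0.9, ("cheese", "swiss"): 0.9, ("cheese", "cake"): 0.8,
    ("house",  "cottage"): 0.7, ("house",  "swiss"): 0.1, ("house",  "cake"): 0.0,
    ("roll",   "cottage"): 0.0, ("roll",   "swiss"): 0.3, ("roll",   "cake"): 0.5,
}

def solve_rat(cues, candidates):
    """Return candidates ranked by total association with all cues."""
    score = lambda w: sum(ASSOC.get((w, c), 0.0) for c in cues)
    return sorted(candidates, key=score, reverse=True)

print(solve_rat(["cottage", "swiss", "cake"], ["cheese", "house", "roll"]))
# -> ['cheese', 'house', 'roll']; "cheese" links all three cues
```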