Publication Date
In 2025: 3
Since 2024: 6
Since 2021 (last 5 years): 9
Since 2016 (last 10 years): 13
Descriptor
Computer Assisted Testing: 13
Creative Thinking: 13
Creativity Tests: 9
Scoring: 8
Correlation: 5
Computer Software: 4
Creativity: 4
Semantics: 4
Artificial Intelligence: 3
Foreign Countries: 3
Formative Evaluation: 3
Author
Denis Dumas: 3
Peter Organisciak: 3
Selcuk Acar: 3
Boris Forthmann: 2
Kelly Berthiaume: 2
Mehran Andalibi: 1
Sharon Ardison: 1
Arnon Hershkovitz: 1
Roger E. Beaty: 1
Philip Buczak: 1
Arthur Cropley: 1
Publication Type
Reports - Research: 11
Journal Articles: 10
Tests/Questionnaires: 2
Dissertations/Theses -…: 1
Reports - Descriptive: 1
Education Level
Higher Education: 2
Postsecondary Education: 2
Secondary Education: 2
Elementary Education: 1
Grade 9: 1
High Schools: 1
Junior High Schools: 1
Middle Schools: 1
Assessments and Surveys
Torrance Tests of Creative Thinking: 2
Program for International Student Assessment: 1
Selcuk Acar; Peter Organisciak; Denis Dumas – Journal of Creative Behavior, 2025
In this three-study investigation, we applied various approaches to score drawings created in response to both Form A and Form B of the Torrance Tests of Creative Thinking-Figural (broadly TTCT-F) as well as the Multi-Trial Creative Ideation task (MTCI). We focused on TTCT-F in Study 1, and utilizing a random forest classifier, we achieved 79% and…
Descriptors: Scoring, Computer Assisted Testing, Models, Correlation
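As a rough, hypothetical illustration of the random-forest approach mentioned in the entry above, the sketch below trains a scikit-learn RandomForestClassifier on pre-extracted drawing features. The feature vectors, labels, and split are placeholders, not the authors' data or pipeline.

```python
# Minimal sketch: a random forest classifying drawings from pre-extracted features.
# X, y, and the split below are placeholders, not the study's data or features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 128))     # hypothetical per-drawing feature vectors
y = rng.integers(0, 2, size=500)    # hypothetical binary creativity labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```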
Buczak, Philip; Huang, He; Forthmann, Boris; Doebler, Philipp – Journal of Creative Behavior, 2023
Traditionally, researchers employ human raters for scoring responses to creative thinking tasks. Apart from the associated costs, this approach entails two potential risks. First, human raters can be subjective in their scoring behavior (inter-rater variance). Second, individual raters are prone to inconsistent scoring patterns…
Descriptors: Computer Assisted Testing, Scoring, Automation, Creative Thinking
Mathias Benedek; Roger E. Beaty – Journal of Creative Behavior, 2025
The PISA 2022 assessment of creative thinking was a moonshot effort that introduced significant advancements over existing creativity tests, including a broad range of domains (written, visual, social, and scientific), implementation in many languages, and sophisticated scoring methods. PISA 2022 demonstrated the general feasibility of assessing…
Descriptors: Creative Thinking, Creativity, Creativity Tests, Scoring
Eran Hadas; Arnon Hershkovitz – Journal of Learning Analytics, 2025
Creativity is an imperative skill for today's learners, one that makes important contributions to issues of inclusion and equity in education. Therefore, assessing creativity is of major importance in educational contexts. However, scoring creativity with traditional tools suffers from subjectivity and is heavily time- and labour-consuming. This…
Descriptors: Creativity, Evaluation Methods, Computer Assisted Testing, Artificial Intelligence
Peter Organisciak; Selcuk Acar; Denis Dumas; Kelly Berthiaume – Grantee Submission, 2023
Automated scoring for divergent thinking (DT) seeks to overcome a key obstacle to creativity measurement: the effort, cost, and reliability of scoring open-ended tests. For a common test of DT, the Alternate Uses Task (AUT), the primary automated approach casts the problem as a semantic distance between a prompt and the resulting idea in a text…
Descriptors: Automation, Computer Assisted Testing, Scoring, Creative Thinking
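The entry above describes casting Alternate Uses Task scoring as a semantic distance between the prompt and the response. A minimal sketch of that idea, assuming a sentence-transformers embedding model as a stand-in for whichever models the paper actually used:

```python
# Sketch: semantic distance between an AUT prompt and a response.
# The embedding model below is an assumption, not necessarily what the paper used.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed stand-in model

def semantic_distance(prompt: str, response: str) -> float:
    """Cosine distance between prompt and response embeddings."""
    a, b = model.encode([prompt, response])
    cosine = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return 1.0 - cosine

# Illustrative responses to the prompt "brick"; a larger distance suggests a more remote idea.
print(semantic_distance("brick", "build a wall"))
print(semantic_distance("brick", "grind it into pigment for paint"))
```

Higher distances are generally interpreted as more original responses; the prompts and responses here are invented examples.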
Sharmin Söderström – International Journal of Mathematical Education in Science and Technology, 2024
This study focuses on computer-based formative assessment for supporting problem solving and reasoning in mathematics. To assist students who encounter difficulties, the software suggested descriptions (diagnoses) of the difficulty from which students could choose. Thereafter, the software provided metacognitive and…
Descriptors: Computer Assisted Testing, Formative Evaluation, Mathematics Instruction, Problem Solving
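A schematic sketch of the interaction described above, in which the software offers candidate diagnoses of a difficulty and returns metacognitive feedback for the one the student selects; the diagnoses and feedback strings are invented for illustration and are not taken from the study.

```python
# Schematic sketch: candidate diagnoses of a difficulty mapped to metacognitive feedback.
# All diagnosis and feedback texts below are illustrative, not from the study.
FEEDBACK = {
    "I don't understand what the task is asking":
        "Re-read the problem and restate it in your own words before computing.",
    "I don't know which method to use":
        "List what is given and what is asked; which known methods connect them?",
    "My answer seems wrong but I can't see why":
        "Check each step against the conditions of the problem, starting from the end.",
}

def formative_feedback(chosen_diagnosis: str) -> str:
    """Return metacognitive feedback for the diagnosis the student selected."""
    return FEEDBACK.get(chosen_diagnosis, "Describe where you got stuck in one sentence.")

for i, diagnosis in enumerate(FEEDBACK, 1):
    print(f"{i}. {diagnosis}")
print(formative_feedback("I don't know which method to use"))
```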
Selcuk Acar; Denis Dumas; Peter Organisciak; Kelly Berthiaume – Grantee Submission, 2024
Creativity is highly valued in both education and the workforce, but assessing and developing creativity can be difficult without psychometrically robust and affordable tools. The open-ended nature of creativity assessments has made them difficult to score, expensive, often imprecise, and therefore impractical for school- or district-wide use. To…
Descriptors: Thinking Skills, Elementary School Students, Artificial Intelligence, Measurement Techniques
Zamzami Zainuddin – Asia Pacific Education Review, 2024
Increasing student engagement and improving learning outcomes are ongoing issues in higher education worldwide. These issues were particularly pertinent during the COVID-19 pandemic when remote learning was selected as the primary instructional learning setting. This study aims to assess the impact of gamification-based quiz instruction in driving…
Descriptors: Game Based Learning, Distance Education, Videoconferencing, Outcomes of Education
Beaty, Roger E.; Johnson, Dan R.; Zeitlen, Daniel C.; Forthmann, Boris – Creativity Research Journal, 2022
Semantic distance is increasingly used for automated scoring of originality on divergent thinking tasks, such as the Alternate Uses Task (AUT). Despite some psychometric support for semantic distance -- including positive correlations with human creativity ratings -- additional work is needed to optimize its reliability and validity, including…
Descriptors: Semantics, Scoring, Creative Thinking, Creativity
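The entry above concerns optimizing the reliability and validity of semantic-distance scores. One step discussed in this literature is aggregating response-level distances into a person-level originality score; the sketch below illustrates one simple way to do that (within-prompt standardization, then averaging) using made-up distance values, and is not the authors' procedure.

```python
# Sketch: aggregating response-level semantic distances into person-level scores.
# The (person, prompt, distance) rows are made-up numbers standing in for model output.
import numpy as np

scores = [
    ("p1", "brick", 0.62), ("p1", "brick", 0.71), ("p1", "rope", 0.55),
    ("p2", "brick", 0.48), ("p2", "rope", 0.80), ("p2", "rope", 0.66),
]

# z-score within prompt so that easier or harder prompts do not bias the composite
by_prompt = {}
for _, prompt, d in scores:
    by_prompt.setdefault(prompt, []).append(d)
stats = {p: (np.mean(v), np.std(v) or 1.0) for p, v in by_prompt.items()}

person_scores = {}
for person, prompt, d in scores:
    m, s = stats[prompt]
    person_scores.setdefault(person, []).append((d - m) / s)

for person, zs in person_scores.items():
    print(person, round(float(np.mean(zs)), 3))
```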
LaVoie, Noelle; Parker, James; Legree, Peter J.; Ardison, Sharon; Kilcullen, Robert N. – Educational and Psychological Measurement, 2020
Automated scoring based on Latent Semantic Analysis (LSA) has been successfully used to score essays and constrained short answer responses. Scoring tests that capture open-ended, short answer responses poses some challenges for machine learning approaches. We used LSA techniques to score short answer responses to the Consequences Test, a measure…
Descriptors: Semantics, Evaluators, Essays, Scoring
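As a hypothetical illustration of LSA-style scoring of short answers, the sketch below builds a TF-IDF plus truncated-SVD space in scikit-learn and compares a new response to a small set of reference responses; the responses and the similarity-based score are invented and do not reproduce the authors' Consequences Test pipeline.

```python
# Sketch: an LSA-style space (TF-IDF + truncated SVD) for comparing short answers.
# The reference responses and the new response are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.pipeline import make_pipeline
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "traffic lights stop working causing accidents",
    "people would need to navigate without signals",
    "hospitals rely on backup generators during outages",
    "food in refrigerators spoils quickly",
]  # hypothetical reference responses to a Consequences-style prompt

lsa = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=3, random_state=0))
ref_vectors = lsa.fit_transform(corpus)

new_response = ["stores could not process card payments"]
new_vector = lsa.transform(new_response)

# A simple similarity-based score: how close the new response sits to the references.
print(cosine_similarity(new_vector, ref_vectors).mean())
```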
Jiajun Guo – ProQuest LLC, 2016
Divergent thinking (DT) tests are the most frequently used types of creativity assessment and have been administered in traditional paper and pencil format for more than a half century. With the prevalence of computer-based testing and increasing demands for large-scale, faster, and more flexible testing procedures, it is necessary to explore and…
Descriptors: Test Construction, Computer Assisted Testing, Creative Thinking, Creativity Tests
Cropley, David; Cropley, Arthur – Educational Technology, 2016
Computer-assisted assessment (CAA) is problematic when it comes to fostering creativity, because in educational thinking the essence of creativity is not finding the correct answer but generating novelty. The idea of "functional" creativity provides rubrics that can serve as the basis for forms of CAA leading to either formative or…
Descriptors: Creativity, Creativity Tests, Formative Evaluation, Computer Assisted Testing
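A schematic sketch of how rubric-based computer-assisted scoring might look under the functional-creativity criteria commonly attributed to Cropley (relevance and effectiveness, novelty, elegance, genesis); the weights and example ratings are invented for illustration.

```python
# Schematic sketch: weighted rubric scoring in the spirit of "functional" creativity.
# Criterion weights and the example ratings are invented, not from the article.
RUBRIC = {
    "relevance_and_effectiveness": 0.4,  # does the product solve the problem?
    "novelty": 0.3,                      # does it go beyond existing solutions?
    "elegance": 0.2,                     # is the solution well crafted?
    "genesis": 0.1,                      # does it open up new possibilities?
}

def rubric_score(ratings: dict[str, float]) -> float:
    """Weighted rubric score on a 0-1 scale; ratings are per-criterion on 0-1."""
    return sum(RUBRIC[c] * ratings.get(c, 0.0) for c in RUBRIC)

example = {"relevance_and_effectiveness": 0.9, "novelty": 0.6, "elegance": 0.7, "genesis": 0.3}
print(round(rubric_score(example), 2))  # formative feedback could flag low criteria
```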
Andalibi, Mehran – International Journal of Higher Education, 2019
The study herein aimed at incorporating the entrepreneurship mindset early in the engineering curriculum of undergraduate students via a final project in an introductory programming course using MATLAB. Students were asked to find a need on campus, in society, or in the market with business potential and write a standalone application…
Descriptors: Creative Thinking, Engineering Education, Problem Solving, Entrepreneurship