Showing 1 to 15 of 41 results
Peer reviewed
Direct link
Mahmood Ul Hassan; Frank Miller – Journal of Educational Measurement, 2024
Multidimensional achievement tests have recently been gaining importance in educational and psychological measurement. For example, multidimensional diagnostic tests can help students determine which particular domain of knowledge they need to improve for better performance. To estimate the characteristics of candidate items (calibration) for…
Descriptors: Multidimensional Scaling, Achievement Tests, Test Items, Test Construction
Peer reviewed
Direct link
Harold Doran; Testsuhiro Yamada; Ted Diaz; Emre Gonulates; Vanessa Culver – Journal of Educational Measurement, 2025
Computer adaptive testing (CAT) is an increasingly common mode of test administration offering improved test security, better measurement precision, and the potential for shorter testing experiences. This article presents a new item selection algorithm based on a generalized objective function to support multiple types of testing conditions and…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
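The paper's generalized objective function is not reproduced in the snippet above, so the following is only a minimal, hypothetical baseline for CAT item selection: choosing the unused item with maximum Fisher information under a two-parameter logistic (2PL) model, with made-up item parameters.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = p_correct(theta, a, b)
    return a**2 * p * (1.0 - p)

def select_next_item(theta_hat, a, b, administered):
    """Pick the unused item with maximum information at the current ability estimate."""
    info = fisher_information(theta_hat, a, b)
    info[list(administered)] = -np.inf          # mask items already given
    return int(np.argmax(info))

# Hypothetical pool of 20 items with discrimination a and difficulty b.
rng = np.random.default_rng(0)
a = rng.uniform(0.8, 2.0, size=20)
b = rng.normal(0.0, 1.0, size=20)

administered = set()
theta_hat = 0.0                                  # provisional ability estimate
item = select_next_item(theta_hat, a, b, administered)
print(f"next item: {item}, info = {fisher_information(theta_hat, a[item], b[item]):.3f}")
```

A real CAT system would wrap this selection step in exposure control and content constraints, which is where a generalized objective function of the kind described in the entry comes in.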
Peer reviewed
Direct link
Félix González-Carrasco; Felipe Espinosa Parra; Izaskun Álvarez-Aguado; Sebastián Ponce Olguín; Vanessa Vega Córdova; Miguel Roselló-Peñaloza – British Journal of Learning Disabilities, 2025
Background: The study focuses on the need to optimise assessment scales for support needs in individuals with intellectual and developmental disabilities. Current scales are often lengthy and redundant, leading to exhaustion and response burden. The goal is to use machine learning techniques, specifically item-reduction methods and selection…
Descriptors: Artificial Intelligence, Intellectual Disability, Developmental Disabilities, Individual Needs
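The specific item-reduction techniques are not detailed in the abstract above; as a hypothetical stand-in for the general idea, the sketch below shortens a scale with a simple redundancy filter that drops items whose simulated responses correlate too strongly with an item already retained.

```python
import numpy as np

def reduce_items(responses, threshold=0.85):
    """Greedy redundancy filter: keep an item only if its correlation with
    every previously kept item stays below the threshold.
    responses: (n_respondents, n_items) array of item scores."""
    corr = np.corrcoef(responses, rowvar=False)
    kept = []
    for j in range(responses.shape[1]):
        if all(abs(corr[j, k]) < threshold for k in kept):
            kept.append(j)
    return kept

# Simulated responses: 200 respondents, 12 items built from 4 underlying traits,
# so each trait is measured by three nearly redundant items.
rng = np.random.default_rng(1)
base = rng.normal(size=(200, 4))
responses = np.repeat(base, 3, axis=1) + rng.normal(scale=0.3, size=(200, 12))
print("items retained:", reduce_items(responses))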
Peer reviewed
Direct link
Guher Gorgun; Okan Bulut – Education and Information Technologies, 2024
In light of the widespread adoption of technology-enhanced learning and assessment platforms, there is a growing demand for innovative, high-quality, and diverse assessment questions. Automatic Question Generation (AQG) has emerged as a valuable solution, enabling educators and assessment developers to efficiently produce a large volume of test…
Descriptors: Computer Assisted Testing, Test Construction, Test Items, Automation
Peer reviewed
PDF on ERIC Download full text
Zsoldos-Marchiș, Iuliana; Bálint-Svella, Éva – Acta Didactica Napocensia, 2023
The concept, development and assessment of computational thinking have increasingly become the focus of research in recent years. Most of this type of research focuses on older children or adults. Preschool age is a sensitive period when many skills develop intensively, so the development of computational thinking skills can already begin at this…
Descriptors: Test Construction, Computation, Thinking Skills, Cognitive Tests
Peer reviewed
Direct link
D. Steger; S. Weiss; O. Wilhelm – Creativity Research Journal, 2023
Creativity can be measured with a variety of methods including self-reports, others' reports, and ability tests. While typical self-reports are best understood as weak proxies of creativity, biographical reports that assess previous creative activities seem more promising. Drawbacks of such measures -- including skewed item distributions, a lack of…
Descriptors: Creativity, Creativity Tests, Test Construction, Algorithms
Peer reviewed
Direct link
Wim J. van der Linden; Luping Niu; Seung W. Choi – Journal of Educational and Behavioral Statistics, 2024
A test battery with two different levels of adaptation is presented: a within-subtest level for the selection of the items in the subtests and a between-subtest level to move from one subtest to the next. The battery runs on a two-level model consisting of a regular response model for each of the subtests extended with a second level for the joint…
Descriptors: Adaptive Testing, Test Construction, Test Format, Test Reliability
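The entry above describes item-level adaptation within each subtest plus a second level that carries information between subtests. A highly simplified sketch of the between-subtest step only, assuming the two subtest abilities are bivariate normal with a known correlation (all numbers hypothetical), is to turn the posterior from subtest 1 into a regression-based prior for subtest 2:

```python
import numpy as np

# Assumed population model: abilities (theta1, theta2) are bivariate normal
# with zero means, unit variances, and correlation rho.
rho = 0.6

# Suppose subtest 1 produced a posterior for theta1 (e.g., from an EAP update).
theta1_mean, theta1_var = 0.8, 0.15

# Between-subtest step: regress theta2 on theta1 to form the prior for subtest 2,
# propagating the remaining uncertainty in theta1.
prior2_mean = rho * theta1_mean
prior2_var = (1.0 - rho**2) + rho**2 * theta1_var
print(f"prior for subtest 2: mean={prior2_mean:.2f}, var={prior2_var:.2f}")
```

The actual battery described in the article uses a full two-level response model rather than this closed-form normal shortcut.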
Peer reviewed
PDF on ERIC Download full text
Emre Zengin; Yasemin Karal – International Journal of Assessment Tools in Education, 2024
This study was carried out to develop a test to assess algorithmic thinking skills. To this end, the twelve steps suggested by Downing (2006) were adopted. Throughout the test development, 24 middle school sixth-grade students and eight experts in different areas took part in the project tasks as needed. The test was given to 252 students…
Descriptors: Grade 6, Algorithms, Thinking Skills, Evaluation Methods
Peer reviewed
Direct link
Silvia Wen-Yu Lee; Jyh-Chong Liang; Chung-Yuan Hsu; Meng-Jung Tsai – Interactive Learning Environments, 2024
While research has shown that students' epistemic beliefs can be a strong predictor of their academic performance, cognitive abilities, or self-efficacy, studies of this topic in computer education are rare. The purpose of this study was twofold. First, it aimed to validate a newly developed questionnaire for measuring students' epistemic beliefs…
Descriptors: Student Attitudes, Beliefs, Computer Science Education, Programming
Yunxiao Chen; Xiaoou Li; Jingchen Liu; Gongjun Xu; Zhiliang Ying – Grantee Submission, 2017
Large-scale assessments are supported by a large item pool. An important task in test development is to assign items into scales that measure different characteristics of individuals, and a popular approach is cluster analysis of items. Classical methods in cluster analysis, such as hierarchical clustering, the K-means method, and latent-class…
Descriptors: Item Analysis, Classification, Graphs, Test Items
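As an illustration of the classical baseline only (not the approach proposed in the paper), the sketch below clusters items into scales by running K-means on their response-pattern columns, using hypothetical simulated dichotomous data with two latent scales.

```python
import numpy as np
from sklearn.cluster import KMeans

# Simulate dichotomous responses: 500 examinees, 12 items, two latent scales.
rng = np.random.default_rng(2)
ability = rng.normal(size=(500, 2))
loadings = np.array([[1, 0]] * 6 + [[0, 1]] * 6)   # items 0-5 load on scale 1, 6-11 on scale 2
logits = ability @ loadings.T + rng.normal(scale=0.5, size=(500, 12))
responses = (logits > 0).astype(int)

# Cluster items (columns) by their response profiles across examinees.
item_profiles = responses.T                        # shape: (n_items, n_examinees)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(item_profiles)
print("item cluster labels:", labels)
```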
Peer reviewed
Wu, Ing-Long – Journal of Educational and Behavioral Statistics, 2001
Presents two binary programming models with a special network structure that can be explored computationally for simultaneous test construction. Uses an efficient special purpose network algorithm to solve these models. An empirical study illustrates the approach. (SLD)
Descriptors: Algorithms, Computer Software, Networks, Test Construction
Peer reviewed
Luecht, Richard M. – Applied Psychological Measurement, 1998
Presents a variation of a "greedy" algorithm that can be used in test-assembly problems. The algorithm, the normalized weighted absolute-deviation heuristic, selects items to have a locally optimal fit to a moving set of average criterion values. Demonstrates application of the model. (SLD)
Descriptors: Algorithms, Computer Assisted Testing, Criteria, Heuristics
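The following is a loose, hypothetical rendering of the greedy idea described above, not the published heuristic itself: at each step the amount of the target still unmet is spread evenly over the remaining test slots, and the item whose contribution is closest to that per-slot average is selected. Item information values and the target are invented for the example.

```python
import numpy as np

def greedy_assemble(item_info, target_info, test_length):
    """Greedy heuristic in the spirit of a weighted absolute-deviation rule:
    repeatedly pick the item whose information is closest to the average
    amount still needed per remaining slot."""
    remaining = list(range(len(item_info)))
    selected, accumulated = [], 0.0
    for step in range(test_length):
        slots_left = test_length - step
        per_slot_target = (target_info - accumulated) / slots_left
        # absolute deviation of each candidate from the moving average target
        dev = [abs(item_info[j] - per_slot_target) for j in remaining]
        best = remaining[int(np.argmin(dev))]
        selected.append(best)
        accumulated += item_info[best]
        remaining.remove(best)
    return selected

# Hypothetical pool: information contribution of 30 items at a reference ability.
rng = np.random.default_rng(3)
item_info = rng.uniform(0.1, 0.6, size=30)
print("selected items:", greedy_assemble(item_info, target_info=4.0, test_length=10))
```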
Peer reviewed
Sanders, Piet F.; Verschoor, Alfred J. – Applied Psychological Measurement, 1998
Presents minimization and maximization models for parallel test construction under constraints. The minimization model constructs weakly and strongly parallel tests of minimum length, while the maximization model constructs weakly and strongly parallel tests with maximum test reliability. (Author/SLD)
Descriptors: Algorithms, Models, Reliability, Test Construction
Peer reviewed
van der Linden, Wim J.; Adema, Jos J. – Journal of Educational Measurement, 1998
Proposes an algorithm for the assembly of multiple test forms in which the multiple-form problem is reduced to a series of computationally less intensive two-form problems. Illustrates how the method can be implemented using 0-1 linear programming and gives two examples. (SLD)
Descriptors: Algorithms, Linear Programming, Test Construction, Test Format
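The article reduces the multiple-form problem to a series of smaller 0-1 linear programming problems. The sketch below is only a crude stand-in for that idea: forms are assembled one at a time from a shrinking pool, and each small 0-1 selection is solved by exhaustive search instead of an LP solver. Pool size, form length, and the information target are hypothetical.

```python
import numpy as np
from itertools import combinations

def assemble_form(pool, info, n_items, target):
    """Tiny 0-1 selection by exhaustive search: pick n_items from the pool whose
    total information is closest to the target (stand-in for a 0-1 LP solve)."""
    best, best_dev = None, np.inf
    for combo in combinations(pool, n_items):
        dev = abs(sum(info[j] for j in combo) - target)
        if dev < best_dev:
            best, best_dev = combo, dev
    return list(best)

# Hypothetical pool of 15 items; assemble 3 forms of 4 items each, sequentially.
rng = np.random.default_rng(4)
info = rng.uniform(0.2, 0.8, size=15)
pool = list(range(15))
for f in range(3):
    form = assemble_form(pool, info, n_items=4, target=2.0)
    pool = [j for j in pool if j not in form]        # shrink the pool for the next form
    print(f"form {f + 1}: items {form}, info = {sum(info[j] for j in form):.2f}")
```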
Peer reviewed
Millman, Jason; Westman, Ronald S. – Journal of Educational Measurement, 1989
Five approaches to writing test items with computer assistance are described. A model of knowledge using a set of structures and a system for implementing the scheme are outlined. The approaches include the author-supplied approach, replacement-set procedures, computer-supplied prototype items, subject-matter mapping, and discourse analysis. (TJH)
Descriptors: Achievement Tests, Algorithms, Computer Assisted Testing, Test Construction
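Of the five approaches listed above, the replacement-set procedure is the most directly mechanical: an item shell is written with slots, and new items are generated by substituting values drawn from replacement sets. The arithmetic shell, replacement values, and answer key below are a hypothetical illustration only.

```python
import random

# Item shell with slots, plus replacement sets for each slot (hypothetical example).
SHELL = "A train travels {speed} km/h for {hours} hours. How far does it travel?"
REPLACEMENTS = {"speed": [40, 60, 80, 120], "hours": [2, 3, 5]}

def generate_item(rng):
    """Fill the shell from the replacement sets and compute the keyed answer."""
    speed = rng.choice(REPLACEMENTS["speed"])
    hours = rng.choice(REPLACEMENTS["hours"])
    stem = SHELL.format(speed=speed, hours=hours)
    return stem, speed * hours

rng = random.Random(5)
for _ in range(3):
    stem, key = generate_item(rng)
    print(f"{stem}  (key: {key} km)")
```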