Publication Date
  In 2025: 1
  Since 2024: 3
  Since 2021 (last 5 years): 16
  Since 2016 (last 10 years): 29
  Since 2006 (last 20 years): 43
Author
  Ueno, Maomi: 4
  Chun Wang: 2
  Ishii, Takatoshi: 2
  van der Linden, Wim J.: 2
  Adekunle Ibrahim Oladejo: 1
  Alario-Hoyos, Carlos: 1
  Ames, Allison J.: 1
  Anders Sjöberg: 1
  Ariel, Adelaide: 1
  Atar, Hakan Yavuz: 1
  Au, Chi Hang: 1
Location
  Brazil: 2
  Germany: 2
  Netherlands: 2
  Nigeria: 2
  Taiwan: 2
  Uruguay: 2
  Afghanistan: 1
  Finland: 1
  France: 1
  Illinois (Chicago): 1
  Japan: 1
Assessments and Surveys
  National Education…: 1
  Program for International…: 1
  Raven Progressive Matrices: 1
  Rosenberg Self Esteem Scale: 1
  Test of English for…: 1
Erik Forsberg; Anders Sjöberg – Measurement: Interdisciplinary Research and Perspectives, 2025
This paper reports a validation study based on descriptive multidimensional item response theory (DMIRT), implemented in the R package "D3mirt", using the ERS-C, an extended version of the Relevance subscale from the Moral Foundations Questionnaire that includes two new items for collectivism (17 items in total). Two latent models are…
Descriptors: Evaluation Methods, Programming Languages, Altruism, Collectivism
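For the record above, the descriptive MIRT quantities in question (multidimensional discrimination, multidimensional difficulty, and item direction angles) follow Reckase's standard formulas. A minimal base-R sketch with hypothetical slope and intercept values, not the ERS-C estimates or the D3mirt interface itself:

    # Hypothetical compensatory MIRT item parameters for 3 items on 3 dimensions
    # (illustrative values only; not the ERS-C calibration)
    a <- matrix(c(1.2, 0.3, 0.1,
                  0.4, 1.1, 0.2,
                  0.6, 0.5, 0.9), nrow = 3, byrow = TRUE)   # slopes
    d <- c(-0.5, 0.2, 0.8)                                  # intercepts

    MDISC <- sqrt(rowSums(a^2))    # multidimensional discrimination
    MDIFF <- -d / MDISC            # multidimensional difficulty

    cosines <- a / MDISC           # direction cosines of each item vector
    angles  <- acos(cosines) * 180 / pi   # angles with the axes, in degrees

    round(cbind(MDISC, MDIFF, angles), 2)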
Chenchen Ma; Jing Ouyang; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Survey instruments and assessments are frequently used in many domains of social science. When the constructs that these assessments try to measure become multifaceted, multidimensional item response theory (MIRT) provides a unified framework and convenient statistical tool for item analysis, calibration, and scoring. However, the computational…
Descriptors: Algorithms, Item Response Theory, Scoring, Accuracy
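A basic multidimensional calibration of the kind discussed in the record above can be reproduced with the mirt package; the sketch below fits an exploratory two-dimensional 2PL to simulated data using a stochastic (quasi-Monte Carlo EM) estimator. It illustrates standard MIRT estimation only, not the authors' proposed algorithm.

    library(mirt)

    set.seed(1)
    a <- matrix(runif(40, 0.8, 2.0), 20, 2)   # slopes for 20 items on 2 factors
    d <- matrix(rnorm(20))                    # intercepts
    dat <- simdata(a, d, N = 1000, itemtype = "dich")

    mod <- mirt(dat, model = 2, itemtype = "2PL", method = "QMCEM")
    coef(mod, simplify = TRUE)$items          # item slope/intercept estimates
    theta <- fscores(mod)                     # person score estimates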
Scharl, Anna; Zink, Eva – Large-scale Assessments in Education, 2022
Educational large-scale assessments (LSAs) often provide plausible values for the administered competence tests to facilitate the estimation of population effects. This requires the specification of a background model that is appropriate for the specific research question. Because the "German National Educational Panel Study" (NEPS) is…
Descriptors: National Competency Tests, Foreign Countries, Programming Languages, Longitudinal Studies
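The general plausible-values workflow described above, a latent regression IRT model with background variables followed by posterior draws, can be sketched with the TAM package, assuming its tam.mml()/tam.pv() interface; this is a toy illustration, not the NEPS operational background model.

    library(TAM)

    set.seed(2)
    resp <- matrix(rbinom(500 * 15, 1, 0.6), 500, 15)   # toy 0/1 item responses
    colnames(resp) <- paste0("I", 1:15)
    Y <- cbind(ses = rnorm(500), female = rbinom(500, 1, 0.5))  # background variables

    mod <- tam.mml(resp = resp, Y = Y)     # IRT model with latent regression on Y
    pvs <- tam.pv(mod, nplausible = 10)    # draw 10 plausible values per person
    head(pvs$pv)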
Kilic, Abdullah Faruk; Uysal, Ibrahim – International Journal of Assessment Tools in Education, 2022
Most researchers investigate the corrected item-total correlation of items when analyzing item discrimination in multidimensional structures under Classical Test Theory, which might lead to underestimating item discrimination and thereby to removing items from the test. Researchers might investigate the corrected item-total correlation with the…
Descriptors: Item Analysis, Correlation, Item Response Theory, Test Items
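The corrected item-total correlation discussed in the record above correlates each item with the total computed from the remaining items, so the item does not inflate its own discrimination estimate. A base-R illustration on simulated dichotomous data:

    set.seed(3)
    X <- matrix(rbinom(200 * 10, 1, 0.5), 200, 10)   # toy 0/1 response matrix

    corrected_r <- sapply(seq_len(ncol(X)), function(j) {
      rest <- rowSums(X[, -j, drop = FALSE])         # total score without item j
      cor(X[, j], rest)
    })
    round(corrected_r, 3)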
Lai, Rina PY; Ellefson, Michelle R. – Journal of Educational Computing Research, 2023
Computational thinking (CT) is an emerging and multifaceted competence important to the computing era. However, despite the growing consensus that CT is a competence domain, theoretical and empirical accounts of it remain scarce in the current literature. To address this issue, rigorous psychometric evaluation procedures were adopted to investigate…
Descriptors: Computation, Thinking Skills, Competence, Psychometrics
Aydin, Muharrem; Karal, Hasan; Nabiyev, Vasif – Education and Information Technologies, 2023
This study aims to examine adaptability for educational games in terms of adaptation elements, components used in creating user profiles, and decision algorithms used for adaptation. For this purpose, articles and full-text papers indexed in the Web of Science, Google Scholar, and ERIC databases between 2000 and 2021 were searched using the keywords…
Descriptors: Educational Games, Game Based Learning, Programming, Physics
Fuchimoto, Kazuma; Ishii, Takatoshi; Ueno, Maomi – IEEE Transactions on Learning Technologies, 2022
Educational assessments often require uniform test forms, each of which has equivalent measurement accuracy but a different set of items. For uniform test assembly, an important issue is increasing the number of uniform tests that can be assembled. Although many automatic uniform test assembly methods exist, the maximum clique algorithm…
Descriptors: Simulation, Efficiency, Test Items, Educational Assessment
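The goal described in the record above, several forms drawn from one bank with comparable measurement properties, can be illustrated with a naive greedy sketch in base R. This is not the maximum-clique method of the paper; it only shows the assembly objective on hypothetical difficulty parameters.

    set.seed(4)
    bank  <- data.frame(item = 1:60, b = rnorm(60))   # hypothetical item difficulties
    K     <- 3                                        # number of forms to assemble
    forms <- vector("list", K)

    # Sort by difficulty, then deal items round-robin so each form gets a similar spread
    ord <- order(bank$b)
    for (i in seq_along(ord)) {
      k <- (i - 1) %% K + 1
      forms[[k]] <- c(forms[[k]], bank$item[ord[i]])
    }
    sapply(forms, function(f) mean(bank$b[bank$item %in% f]))  # near-equal means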
Partchev, Ivailo – Journal of Intelligence, 2020
We analyze a 12-item version of Raven's Standard Progressive Matrices test, traditionally scored with the sum score. We discuss some important differences between assessment in practice and psychometric modelling. We demonstrate some advanced diagnostic tools in the freely available R package, dexter. We find that the first item in the test…
Descriptors: Intelligence Tests, Scores, Psychometrics, Diagnostic Tests
Computerized Adaptive Assessment of Understanding of Programming Concepts in Primary School Children
Hogenboom, Sally A. M.; Hermans, Felienne F. J.; Van der Maas, Han L. J. – Computer Science Education, 2022
Background and Context: Valid assessment of understanding of programming concepts in primary school children is essential to implement and improve programming education. Objective: We developed and validated the Computerized Adaptive Programming Concepts Test (CAPCT) with a novel application of Item Response Theory. The CAPCT is a web-based and…
Descriptors: Computer Assisted Testing, Adaptive Testing, Programming, Knowledge Level
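Item Response Theory drives an adaptive test like the one above by repeatedly selecting the item that is most informative at the child's current ability estimate. A minimal base-R sketch of maximum-information selection under a 2PL model, with a hypothetical item bank rather than the CAPCT calibration:

    set.seed(5)
    a <- runif(50, 0.8, 2.0)      # hypothetical discriminations
    b <- rnorm(50)                # hypothetical difficulties

    info_2pl <- function(theta, a, b) {
      p <- 1 / (1 + exp(-a * (theta - b)))
      a^2 * p * (1 - p)           # Fisher information of each item at theta
    }

    theta_hat <- 0                # provisional ability estimate
    administered <- integer(0)
    for (step in 1:5) {
      info <- info_2pl(theta_hat, a, b)
      info[administered] <- -Inf                 # do not reuse items
      next_item <- which.max(info)
      administered <- c(administered, next_item)
      # ... present item, score response, update theta_hat (e.g., EAP) ...
    }
    administered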
Musa Adekunle Ayanwale; Jamiu Oluwadamilare Amusa; Adekunle Ibrahim Oladejo; Funmilayo Ayedun – Interchange: A Quarterly Review of Education, 2024
The study focuses on assessing the proficiency levels of higher education students, specifically the physics achievement test (PHY 101) at the National Open University of Nigeria (NOUN). This test, like others, evaluates various aspects of knowledge and skills simultaneously. However, relying on traditional models for such tests can result in…
Descriptors: Item Response Theory, Difficulty Level, Item Analysis, Test Items
Saatcioglu, Fatima Munevver; Atar, Hakan Yavuz – International Journal of Assessment Tools in Education, 2022
This study aims to examine the effects of mixture item response theory (IRT) models on item parameter estimation and classification accuracy under different conditions. The manipulated variables of the simulation study are set as mixture IRT models (Rasch, 2PL, 3PL); sample size (600, 1000); the number of items (10, 30); the number of latent…
Descriptors: Accuracy, Classification, Item Response Theory, Programming Languages
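Simulation designs like the one above generate responses from known latent classes before estimating mixture IRT models. The base-R sketch below simulates a two-class mixture Rasch data set with different difficulty profiles per class; it does not reproduce the authors' conditions.

    set.seed(6)
    N <- 600; J <- 10
    class <- rbinom(N, 1, 0.5) + 1                 # latent class membership (1 or 2)
    b <- rbind(seq(-2, 2, length.out = J),         # class-1 item difficulties
               rev(seq(-2, 2, length.out = J)))    # class-2: reversed profile
    theta <- rnorm(N)

    prob <- 1 / (1 + exp(-(theta - b[class, ])))   # N x J Rasch probabilities
    resp <- matrix(rbinom(N * J, 1, prob), N, J)   # simulated 0/1 responses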
Xue Zhang; Chun Wang – Grantee Submission, 2021
Among current state-of-the-art estimation methods for multilevel IRT models, the two-stage divide-and-conquer strategy has practical advantages, such as a clearer definition of factors, convenience for secondary data analysis, convenience for model calibration and fit evaluation, and avoidance of improper solutions. However, various studies have shown…
Descriptors: Error of Measurement, Error Correction, Item Response Theory, Comparative Analysis
Ayanwale, Musa Adekunle; Ndlovu, Mdutshekelwa – Education Sciences, 2021
This study investigated the scalability of a cognitive multiple-choice test through the Mokken package in the R programming language for statistical computing. A 2019 mathematics West African Examinations Council (WAEC) instrument was used to gather data from randomly drawn K-12 participants (N = 2866; Male = 1232; Female = 1634; Mean age = 16.5…
Descriptors: Cognitive Tests, Multiple Choice Tests, Scaling, Test Items
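The scalability analysis described in the record above is exposed by the mokken package in R; a short sketch on simulated data (not the WAEC responses), assuming the package's coefH(), check.monotonicity(), and aisp() functions:

    library(mokken)

    set.seed(7)
    X <- matrix(rbinom(300 * 12, 1, 0.6), 300, 12)   # toy dichotomous responses
    colnames(X) <- paste0("item", 1:12)

    coefH(X)                          # Loevinger's scalability coefficients (Hij, Hi, H)
    summary(check.monotonicity(X))    # manifest monotonicity checks
    aisp(X, lowerbound = 0.3)         # automated item selection into Mokken scales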
Padgett, R. Noah; Morgan, Grant B. – Measurement: Interdisciplinary Research and Perspectives, 2020
The "extended Rasch modeling" (eRm) package in R provides users with a comprehensive set of tools for Rasch modeling for scale evaluation and general modeling. We provide a brief introduction to Rasch modeling followed by a review of literature that utilizes the eRm package. Then, the key features of the eRm package for scale evaluation…
Descriptors: Computer Software, Programming Languages, Self Esteem, Self Concept Measures
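A typical eRm scale-evaluation workflow of the kind reviewed above looks like the sketch below, run here on simulated dichotomous data rather than the self-esteem items discussed in the paper; function names assume the package's standard interface.

    library(eRm)

    set.seed(8)
    X <- sim.rasch(persons = rnorm(300), items = seq(-1.5, 1.5, length.out = 10))

    res <- RM(X)                      # conditional ML estimation of the Rasch model
    summary(res)
    pp  <- person.parameter(res)      # person ability estimates
    itemfit(pp)                       # item infit/outfit statistics
    plotICC(res, item.subset = 1:4)   # item characteristic curves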
Barrett, Michelle D.; van der Linden, Wim J. – Journal of Educational Measurement, 2017
Linking functions adjust for differences between identifiability restrictions used in different instances of the estimation of item response model parameters. These adjustments are necessary when results from those instances are to be compared. As linking functions are derived from estimated item response model parameters, parameter estimation…
Descriptors: Item Response Theory, Error of Measurement, Programming, Evaluation Methods
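The linking functions discussed above place parameters from one calibration on the scale of another. A base-R mean-sigma illustration with hypothetical common-item estimates; it shows the linking function itself, not the article's treatment of estimation error in the link.

    b_old <- c(-1.2, -0.4, 0.3, 0.9, 1.6)     # common-item difficulties, old calibration
    b_new <- c(-1.0, -0.2, 0.5, 1.1, 1.9)     # same items, new calibration
    a_new <- c(1.1, 0.9, 1.4, 1.0, 1.3)       # discriminations, new calibration

    A <- sd(b_old) / sd(b_new)                # slope of the linear linking function
    B <- mean(b_old) - A * mean(b_new)        # intercept

    b_linked <- A * b_new + B                 # new-form difficulties on the old scale
    a_linked <- a_new / A                     # discriminations transform inversely
    round(cbind(b_linked, a_linked), 3)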