Showing 1 to 15 of 19 results
Peer reviewed
Eray Selçuk; Ergül Demir – International Journal of Assessment Tools in Education, 2024
This research aims to compare the ability and item parameter estimates of Item Response Theory under maximum likelihood and Bayesian approaches across different Monte Carlo simulation conditions. For this purpose, depending on changes in the prior distribution type, sample size, test length, and logistic model, the ability and item…
Descriptors: Item Response Theory, Item Analysis, Test Items, Simulation
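For orientation, a minimal sketch of the two estimation approaches compared in the entry above, assuming a two-parameter logistic model (the study also varies the model, prior, sample size, and test length, which are not shown here):
\[
P_j(\theta_i) = \frac{1}{1 + \exp[-a_j(\theta_i - b_j)]}, \qquad
\hat\theta_i^{\mathrm{ML}} = \arg\max_{\theta} \prod_j P_j(\theta)^{x_{ij}}\,[1 - P_j(\theta)]^{1 - x_{ij}},
\]
\[
\pi(\theta \mid \mathbf{x}_i) \propto \pi(\theta) \prod_j P_j(\theta)^{x_{ij}}\,[1 - P_j(\theta)]^{1 - x_{ij}},
\]
where maximum likelihood maximizes the likelihood alone, while the Bayesian estimate is taken from the posterior (e.g., its mean or mode); the form of the prior \(\pi(\theta)\) is one of the simulation conditions the study manipulates.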
Peer reviewed
Alqarni, Abdulelah Mohammed – Journal on Educational Psychology, 2019
This study compares the psychometric properties of reliability in Classical Test Theory (CTT), item information in Item Response Theory (IRT), and validation from the perspective of modern validity theory for the purpose of bringing attention to potential issues that might exist when testing organizations use both test theories in the same testing…
Descriptors: Test Theory, Item Response Theory, Test Construction, Scoring
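As a rough guide to the quantities being contrasted above, using standard textbook definitions rather than the article's own notation: CTT reliability is a single test-level ratio, whereas IRT item information varies across the ability scale:
\[
\rho_{XX'} = \frac{\sigma^2_T}{\sigma^2_X}, \qquad
I_j(\theta) = a_j^2\,P_j(\theta)\,[1 - P_j(\theta)], \qquad
\mathrm{SE}(\hat\theta) = \frac{1}{\sqrt{\sum_j I_j(\theta)}},
\]
with the item-information expression written for the two-parameter logistic case.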
Peer reviewed
Raykov, Tenko; Marcoulides, George A. – Educational and Psychological Measurement, 2016
The frequently neglected and often misunderstood relationship between classical test theory and item response theory is discussed for the unidimensional case with binary measures and no guessing. It is pointed out that popular item response models can be directly obtained from classical test theory-based models by accounting for the discrete…
Descriptors: Test Theory, Item Response Theory, Models, Correlation
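A minimal sketch of the kind of link the entry above describes, assuming a congeneric true-score model whose continuous latent response is dichotomized at a threshold (an illustration consistent with the abstract, not the authors' exact derivation):
\[
X_j^{*} = \lambda_j \eta + \varepsilon_j, \qquad
X_j = \mathbf{1}\{X_j^{*} > \tau_j\}
\;\Longrightarrow\;
P(X_j = 1 \mid \eta) = \Phi\!\left(\frac{\lambda_j \eta - \tau_j}{\sigma_{\varepsilon_j}}\right),
\]
which has the form of a two-parameter normal-ogive item response function obtained directly from classical test theory assumptions.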
Peer reviewed
Möller, Jens; Müller-Kalthoff, Hanno; Helm, Friederike; Nagy, Nicole; Marsh, Herb W. – Frontline Learning Research, 2016
The dimensional comparison theory (DCT) focuses on the effects of internal, dimensional comparisons (e.g., "How good am I in math compared to English?") on academic self-concepts with widespread consequences for students' self-evaluation, motivation, and behavioral choices. DCT is based on the internal/external frame of reference model…
Descriptors: Comparative Analysis, Comparative Testing, Self Concept, Self Concept Measures
Woodruff, David; Traynor, Anne; Cui, Zhongmin; Fang, Yu – ACT, Inc., 2013
Professional standards for educational testing recommend that both the overall standard error of measurement and the conditional standard error of measurement (CSEM) be computed on the score scale used to report scores to examinees. Several methods have been developed to compute scale score CSEMs. This paper compares three methods, based on…
Descriptors: Comparative Analysis, Error of Measurement, Scores, Scaling
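One widely used raw-score CSEM that such comparisons typically take as a starting point is Lord's binomial-error formula, shown here for orientation (the paper's three scale-score methods are not reproduced):
\[
\mathrm{CSEM}(x) = \sqrt{\frac{x\,(n - x)}{n - 1}},
\]
where \(x\) is an examinee's number-correct score and \(n\) is the number of items; a scale-score CSEM is then obtained by carrying this error through the raw-to-scale conversion.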
Peer reviewed
Puhan, Gautam; Sinharay, Sandip; Haberman, Shelby; Larkin, Kevin – Applied Measurement in Education, 2010
Do subscores provide additional information beyond what is provided by the total score? Is there a method that can estimate more trustworthy subscores than observed subscores? To answer the first question, this study evaluated whether the true subscore was more accurately predicted by the observed subscore or by the total score. To answer the second…
Descriptors: Licensing Examinations (Professions), Scores, Computation, Methods
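A hedged sketch of the comparison behind the first question, using Kelley-type regression estimates from classical test theory (standard results, not necessarily the authors' exact formulation):
\[
\hat T_s^{(\mathrm{sub})} = \mu_s + \rho_s\,(X_s - \mu_s), \qquad
\hat T_s^{(\mathrm{tot})} = \mu_s + \beta_s\,(X - \mu_X),
\]
where \(\rho_s\) is the subscore reliability and \(\beta_s\) the regression coefficient of the true subscore on the total score; the subscore is worth reporting only if predicting the true subscore from \(X_s\) (or from \(X_s\) and \(X\) jointly) yields a smaller mean squared error than predicting it from the total score alone.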
Peer reviewed
Wetzel, Eunike; Hell, Benedikt; Passler, Katja – Journal of Career Assessment, 2012
Three test construction strategies are described and illustrated in the development of the Verb Interest Test (VIT), an inventory that assesses vocational interests using verbs. Verbs might be a promising alternative to the descriptions of occupational activities used in most vocational interest inventories because they are context-independent,…
Descriptors: Test Construction, Culture Fair Tests, Vocational Interests, Interest Inventories
Peer reviewed
Wiberg, Marie; Sundstrom, Anna – Practical Assessment, Research & Evaluation, 2009
A common problem in predictive validity studies in education and psychology, for example in educational and employment selection, is restriction of range in the predictor variables. There are several methods for correcting correlations for restriction of range. The aim of this paper was to examine the usefulness of two approaches to…
Descriptors: Predictive Validity, Predictor Variables, Correlation, Mathematics
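For orientation, one classical correction in this area is Thorndike's Case II formula for direct range restriction on the predictor (a standard result, not necessarily one of the two approaches the paper examines):
\[
r_c = \frac{r\,u}{\sqrt{1 - r^2 + r^2 u^2}}, \qquad u = \frac{S_X}{s_X},
\]
where \(r\) is the predictor-criterion correlation in the restricted (selected) group, \(S_X\) the predictor standard deviation in the unrestricted population, and \(s_X\) that in the restricted group.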
Peer reviewed
Wendt, Heike; Bos, Wilfried; Goy, Martin – Educational Research and Evaluation, 2011
Several current international comparative large-scale assessments of educational achievement (ICLSA) make use of "Rasch models" to address functions essential for valid cross-cultural comparisons. From a historical perspective, ICLSA and Georg Rasch's "models for measurement" emerged at about the same time, half a century ago. However, the…
Descriptors: Measures (Individuals), Test Theory, Group Testing, Educational Testing
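The "models for measurement" referred to above center on the dichotomous Rasch model; its separation of person and item parameters is what makes it attractive for cross-cultural comparison:
\[
P(X_{ij} = 1 \mid \theta_i, b_j) = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)},
\]
under which the raw score is a sufficient statistic for \(\theta_i\), so item parameters can in principle be estimated independently of the particular sample of persons, and vice versa.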
Eason, Sandra H. – 1989
Generalizability theory provides a technique for accurately estimating the reliability of measurements. The power of this theory is based on the simultaneous analysis of multiple sources of error variances. Equally important, generalizability theory considers relationships among the sources of measurement error. Just as multivariate inferential…
Descriptors: Comparative Analysis, Generalizability Theory, Test Reliability, Test Theory
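A minimal illustration of the simultaneous variance decomposition the abstract refers to, for a simple one-facet persons-by-items design (standard G-theory notation, not drawn from the paper itself):
\[
\sigma^2(X_{pi}) = \sigma^2_p + \sigma^2_i + \sigma^2_{pi,e}, \qquad
E\rho^2 = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_{pi,e}/n_i},
\]
where the generalizability coefficient \(E\rho^2\) plays the role of a reliability coefficient but can be recomputed for different numbers of items (or other facets) in the intended universe of generalization.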
van den Brink, Wulfert – Evaluation in Education: International Progress, 1982
Binomial models for domain-referenced testing are compared, emphasizing the assumptions underlying the beta-binomial model. Advantages and disadvantages are discussed. A proposed item sampling model is presented which takes the effect of guessing into account. (Author/CM)
Descriptors: Comparative Analysis, Criterion Referenced Tests, Item Sampling, Measurement Techniques
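For readers unfamiliar with the models being compared, the basic binomial and beta-binomial setup for domain-referenced testing (the guessing-adjusted item sampling model proposed in the article is not reproduced here):
\[
P(X = x \mid \pi) = \binom{n}{x}\pi^x (1 - \pi)^{n - x}, \qquad
\pi \sim \mathrm{Beta}(\alpha, \beta)
\;\Longrightarrow\;
P(X = x) = \binom{n}{x}\,\frac{B(x + \alpha,\; n - x + \beta)}{B(\alpha, \beta)},
\]
where \(\pi\) is an examinee's domain score (the proportion of the item domain the examinee can answer correctly) and \(n\) is the number of items sampled.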
Peer reviewed
Wilson, Mark; Allen, Diane D.; Li, Jun Corser – Health Education Research, 2006
This paper compares the approach and resultant outcomes of item response models (IRMs) and classical test theory (CTT). First, it reviews basic ideas of CTT, and compares them to the ideas about using IRMs introduced in an earlier paper. It then applies a comparison scheme based on the AERA/APA/NCME "Standards for Educational and…
Descriptors: Health Education, Self Efficacy, Health Behavior, Measures (Individuals)
Peer reviewed
Millsap, Roger E.; Everson, Howard – Multivariate Behavioral Research, 1991
Use of confirmatory factor analysis (CFA) with nonzero latent means in testing six different measurement models from classical test theory is discussed. Implications of the six models for observed mean and covariance structures are described, and three examples of the use of CFA in testing the models are presented. (SLD)
Descriptors: Comparative Analysis, Equations (Mathematics), Goodness of Fit, Mathematical Models
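The measurement models in question can be read as increasingly restrictive versions of the congeneric model, here written with a mean structure because the latent means are nonzero (a sketch of the kind of restrictions involved, not the article's full set of six models):
\[
X_j = \mu_j + \lambda_j T + \varepsilon_j, \qquad E(T) = \kappa \neq 0,
\]
with tau-equivalent measures imposing \(\lambda_j = \lambda\) for all \(j\) and parallel measures additionally imposing equal error variances \(\sigma^2_{\varepsilon_j} = \sigma^2_{\varepsilon}\); each restriction carries testable implications for both the observed covariance matrix and the observed mean vector.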
Thompson, Bruce; Dennings, Bruce – 1993
Q-technique factor analysis identifies clusters or factors of people, rather than of variables, and has proven very popular, especially with regard to testing typology theories. The present study investigated the utility of three different protocols for obtaining data for Q-technique studies. These three protocols were: (1) a conventional ipsative…
Descriptors: Classification, Comparative Analysis, Data Collection, Factor Analysis
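The contrast with conventional (R-technique) factor analysis can be stated compactly, in informal notation not taken from the paper: with an \(n \times p\) data matrix \(\mathbf{X}\) whose rows are persons and whose columns are variables,
\[
\mathbf{R}_{\mathrm{R}} = \mathrm{corr}(\mathbf{X}) \in \mathbb{R}^{p \times p}, \qquad
\mathbf{R}_{\mathrm{Q}} = \mathrm{corr}(\mathbf{X}^{\top}) \in \mathbb{R}^{n \times n},
\]
where \(\mathrm{corr}(\cdot)\) denotes the correlation matrix of the columns of its argument; Q-technique factors therefore group people with similar response profiles, which is why the method lends itself to testing typology theories.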
Peer reviewed
Ramsay, James O. – Psychometrika, 1989
An alternative to the Rasch model is introduced. It characterizes strength of response according to the ratio of ability and difficulty parameters rather than their difference. Joint estimation and marginal estimation models are applied to two test data sets. (SLD)
Descriptors: Ability, Bayesian Statistics, College Entrance Examinations, Comparative Analysis
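A deliberately rough illustration of the contrast described above, not the article's exact parameterization: the Rasch model drives the response probability by the difference \(\theta_i - b_j\), whereas a quotient-type alternative uses the ratio \(\theta_i / b_j\), for example
\[
P^{\mathrm{Rasch}}_{ij} = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)}, \qquad
P^{\mathrm{ratio}}_{ij} = \frac{\exp(\theta_i / b_j)}{K + \exp(\theta_i / b_j)}, \quad \theta_i, b_j > 0,
\]
with \(K\) a scaling constant; readers should consult the article for the exact model and its joint and marginal estimation procedures.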