Showing 1 to 15 of 17 results
Klauth, Bo – ProQuest LLC, 2023
In conducting confirmatory factor analysis with ordered response items, the literature suggests that when the number of responses is five and item skewness (IS) is approximately normal, researchers can employ maximum likelihood with robust standard errors (MLR). However, MLR can yield biased factor loadings (FL) and FL standard errors (FLSE) when…
Descriptors: Item Response Theory, Evaluation Methods, Factor Analysis, Error of Measurement
Sophie Lilit Litschwartz – ProQuest LLC, 2021
In education research, test scores are a common object of analysis. Across studies, test scores can be an important outcome, a highly predictive covariate, or a means of assigning treatment. However, test scores are a measure of an underlying proficiency that we cannot observe directly, and so they contain error. This measurement error has implications for how…
Descriptors: Scores, Inferences, Educational Research, Evaluation Methods
Lotfi Simon Kerzabi – ProQuest LLC, 2021
Monte Carlo methods are an accepted methodology for generating critical values for a maximum test. The same methods are also applicable to evaluating the robustness of the newly created test. A table of critical values was created, and the robustness of the new maximum test was evaluated for five different distributions. Robustness…
Descriptors: Data, Monte Carlo Methods, Testing, Evaluation Research
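As an illustration of the Monte Carlo approach this abstract describes, the sketch below simulates the null distribution of a hypothetical two-statistic maximum test and reads off an empirical critical value. The statistic definition, sample sizes, and null distribution are assumptions for illustration, not the dissertation's actual design.

```python
import numpy as np

rng = np.random.default_rng(42)

def max_test_stat(x, y):
    """Hypothetical maximum test: the larger absolute value of a
    mean-based and a (crudely scaled) median-based location statistic."""
    n, m = len(x), len(y)
    se = np.sqrt(x.var(ddof=1) / n + y.var(ddof=1) / m)
    t_mean = (x.mean() - y.mean()) / se
    t_med = (np.median(x) - np.median(y)) / se  # same scale estimate, for simplicity
    return max(abs(t_mean), abs(t_med))

def mc_critical_value(n, m, alpha=0.05, reps=10_000):
    """Simulate the null (both samples standard normal) and return the
    upper-alpha quantile of the maximum statistic."""
    stats = np.empty(reps)
    for i in range(reps):
        stats[i] = max_test_stat(rng.standard_normal(n), rng.standard_normal(m))
    return np.quantile(stats, 1 - alpha)

crit = mc_critical_value(20, 20)
```

The same simulation loop, re-run with non-normal generators in place of `standard_normal`, is how robustness across the five distributions mentioned above would be probed.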
Hosseinzadeh, Mostafa – ProQuest LLC, 2021
In real-world situations, multidimensional data may appear on large-scale tests or attitudinal surveys. A simple-structure multidimensional model may be used to evaluate the items, ignoring the cross-loading of some items on the secondary dimension. The purpose of this study was to investigate the influence of structure complexity magnitude of…
Descriptors: Item Response Theory, Models, Simulation, Evaluation Methods
Wenjing Guo – ProQuest LLC, 2021
Constructed response (CR) items are widely used in large-scale testing programs, including the National Assessment of Educational Progress (NAEP) and many district and state-level assessments in the United States. One unique feature of CR items is that they depend on human raters to assess the quality of examinees' work. The judgment of human…
Descriptors: National Competency Tests, Responses, Interrater Reliability, Error of Measurement
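Since this abstract centers on human raters scoring constructed-response items, a standard way to quantify the interrater reliability named in its descriptors is Cohen's kappa. The sketch below is a minimal generic implementation with made-up rating vectors; it is not drawn from the dissertation itself.

```python
import numpy as np

def cohen_kappa(r1, r2):
    """Chance-corrected agreement between two raters' categorical scores."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    po = np.mean(r1 == r2)                                        # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)   # chance agreement
    return (po - pe) / (1 - pe)

# Illustrative scores from two hypothetical raters on an 0-2 rubric
r1 = [0, 1, 2, 0, 1, 2, 1, 0]
r2 = [0, 1, 2, 0, 2, 2, 1, 1]
k = cohen_kappa(r1, r2)
```

Values near 1 indicate agreement well beyond chance; values near 0 indicate agreement no better than chance.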
Greifer, Noah – ProQuest LLC, 2018
There has been some research on the use of propensity scores in the context of measurement error in the confounding variables; one recommended method is to generate estimates of the mismeasured covariate using a latent variable model and to use those estimates (i.e., factor scores) in place of the covariate. I describe a simulation study…
Descriptors: Evaluation Methods, Probability, Scores, Statistical Analysis
Hyunsuk Han – ProQuest LLC, 2018
In Huggins-Manley & Han (2017), it was shown that WLSMV global model fit indices used in structural equation modeling practice are sensitive to person parameter estimate RMSE and item difficulty parameter estimate RMSE that result from local dependence in 2-PL IRT models, particularly when conditioning on number of test items and sample size.…
Descriptors: Models, Statistical Analysis, Item Response Theory, Evaluation Methods
Gulsah Gurkan – ProQuest LLC, 2021
Secondary analyses of international large-scale assessments (ILSA) commonly characterize relationships between variables of interest using correlations. However, the accuracy of correlation estimates is impaired by artefacts such as measurement error and clustering. Despite advancements in methodology, conventional correlation estimates or…
Descriptors: Secondary School Students, Achievement Tests, International Assessment, Foreign Countries
Leventhal, Brian – ProQuest LLC, 2017
More robust and rigorous psychometric models, such as multidimensional Item Response Theory models, have been advocated for survey applications. However, item responses may be influenced by construct-irrelevant variance factors such as preferences for extreme response options. Through empirical and simulation methods, this study evaluates the use…
Descriptors: Psychometrics, Item Response Theory, Simulation, Models
Spencer, Bryden – ProQuest LLC, 2016
Value-added models are a class of growth models used in education to assign responsibility for student growth to teachers or schools. For value-added models to be used fairly, sufficient statistical precision is necessary for accurate teacher classification. Previous research indicated precision below practical limits. An alternative approach has…
Descriptors: Monte Carlo Methods, Comparative Analysis, Accuracy, High Stakes Tests
Bond, William Glenn – ProQuest LLC, 2012
In this paper, I propose to demonstrate a means of error estimation preprocessing in the assembly of overlapping aerial image mosaics. The mosaic program automatically assembles several hundred aerial images from a data set by aligning them, via image registration using a pattern search method, onto a GIS grid. The method presented first locates…
Descriptors: Error Patterns, Error of Measurement, Information Science, Geographic Information Systems
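The abstract above mentions aligning aerial images onto a grid via image registration with a pattern search method. As a toy version of that idea, the sketch below does an exhaustive pattern search over integer pixel shifts, scoring each candidate by sum-of-squared-differences on the overlap; the tile sizes and search radius are invented for illustration and are not the mosaic program's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(7)

def register_offset(ref, img, search=5):
    """Pattern search over integer (dy, dx) shifts of img relative to ref,
    minimizing mean squared difference on the overlapping region."""
    best, best_score = (0, 0), np.inf
    h, w = img.shape
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            a = ref[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
            b = img[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
            score = np.mean((a - b) ** 2)
            if score < best_score:
                best, best_score = (dy, dx), score
    return best

# Two overlapping crops of the same synthetic "scene", offset by (2, 3)
big = rng.random((60, 60))
ref = big[10:40, 10:40]
img = big[12:42, 13:43]
offset = register_offset(ref, img)   # -> (2, 3)
```

A real mosaic pipeline would refine this with a smarter search pattern and subpixel interpolation, but the scoring-and-search skeleton is the same.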
Domingue, Benjamin Webre – ProQuest LLC, 2012
In psychometrics, it is difficult to verify that measurement instruments can be used to produce numeric values with the desirable property that differences between units are equal-interval because the attributes being measured are latent. The theory of additive conjoint measurement (e.g., Krantz, Luce, Suppes, & Tversky, 1971, ACM) guarantees…
Descriptors: Psychometrics, Evaluation Methods, Error of Measurement, Intervals
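One testable consequence of the additive conjoint measurement theory this abstract invokes is the double cancellation axiom. The sketch below checks one common statement of that axiom over every ordered 3x3 submatrix of a data matrix; it is a simplified illustration of the axiom only, not the dissertation's checking procedure, and a full ACM analysis involves further axioms.

```python
import numpy as np
from itertools import combinations

def double_cancellation_ok(m):
    """Check one form of double cancellation on every 3x3 submatrix of m
    (rows and columns assumed already ordered): if m[1][0] >= m[0][1]
    and m[2][1] >= m[1][2], then m[2][0] >= m[0][2] must hold."""
    m = np.asarray(m, dtype=float)
    rows, cols = m.shape
    for r in combinations(range(rows), 3):
        for c in combinations(range(cols), 3):
            s = m[np.ix_(r, c)]
            if s[1, 0] >= s[0, 1] and s[2, 1] >= s[1, 2] and not s[2, 0] >= s[0, 2]:
                return False
    return True

# Any additive matrix m[i][j] = u[i] + v[j] satisfies the axiom
u, v = [0.0, 1.0, 2.5], [0.0, 0.7, 2.0]
additive = [[ui + vj for vj in v] for ui in u]

# A hand-built counterexample violates it
violating = [[0, 0, 10], [1, 0, 0], [0, 1, 0]]
```

For the additive matrix the antecedent inequalities sum to exactly the consequent, which is why additivity guarantees the axiom holds.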
Cheema, Jehanzeb – ProQuest LLC, 2012
This study examined the effects of several factors, such as the choice of analytical method, the missing data handling method, sample size, and the proportion of missing data, on the accuracy of estimation under different missing data treatments. In order to accomplish this, a methodological approach involving simulated data was…
Descriptors: Educational Research, Educational Researchers, Statistical Analysis, Sample Size
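To make the kind of simulation this abstract describes concrete, the sketch below compares two common missing data treatments, listwise deletion and unconditional mean imputation, for recovering a regression slope under MCAR missingness. The data-generating model, missingness rate, and outcome measure are assumptions for illustration, not the study's actual conditions.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n=1000, miss_prop=0.3, reps=500, true_slope=2.0):
    """Average estimated slope under listwise deletion vs. mean imputation
    when the outcome y is missing completely at random."""
    slopes_lw, slopes_mi = [], []
    for _ in range(reps):
        x = rng.standard_normal(n)
        y = true_slope * x + rng.standard_normal(n)
        miss = rng.random(n) < miss_prop                 # MCAR missingness in y
        slopes_lw.append(np.polyfit(x[~miss], y[~miss], 1)[0])   # complete cases
        y_mi = np.where(miss, y[~miss].mean(), y)                # mean imputation
        slopes_mi.append(np.polyfit(x, y_mi, 1)[0])
    return np.mean(slopes_lw), np.mean(slopes_mi)

b_lw, b_mi = simulate()
```

Listwise deletion stays near the true slope of 2.0 under MCAR, while mean imputation attenuates the slope by roughly the missing proportion (here toward about 1.4), the sort of treatment-by-condition contrast such simulation designs quantify.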
Mbella, Kinge Keka – ProQuest LLC, 2012
Mixed-format assessments are increasingly being used in large scale standardized assessments to measure a continuum of skills ranging from basic recall to higher order thinking skills. These assessments are usually comprised of a combination of (a) multiple-choice items which can be efficiently scored, have stable psychometric properties, and…
Descriptors: Educational Assessment, Test Format, Evaluation Methods, Multiple Choice Tests
Diakow, Ronli Phyllis – ProQuest LLC, 2013
This dissertation comprises three papers that propose, discuss, and illustrate models to make improved inferences about research questions regarding student achievement in education. Addressing the types of questions common in educational research today requires three different "extensions" to traditional educational assessment: (1)…
Descriptors: Inferences, Educational Assessment, Academic Achievement, Educational Research