Showing 1,576 to 1,590 of 3,295 results
Peer reviewed
Direct link
Maris, Gunter; Schmittmann, Verena D.; Borsboom, Denny – Measurement: Interdisciplinary Research and Perspectives, 2010
Test equating under the NEAT design is, at best, a necessary evil. At bottom, the procedure aims to reach a conclusion about what a tested person would have done had he or she been administered a set of items that were in fact never administered. It is not possible to infer such a conclusion from the data, because one simply has not made the required…
Descriptors: Equated Scores, Inferences, Item Response Theory, Error of Measurement
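The counterfactual the abstract describes is, in practice, bridged through a common anchor test. A minimal sketch of one standard approach, chained linear equating, with invented summary statistics (none of these numbers come from the article):

```python
# Illustrative sketch (not from the article): chained linear equating
# under the NEAT (non-equivalent groups, anchor test) design.
# Form X and anchor A are taken by group 1; form Y and A by group 2.
# Hypothetical summary statistics:
mu_X, sd_X = 30.0, 6.0     # form X in group 1
mu_A1, sd_A1 = 12.0, 3.0   # anchor in group 1
mu_A2, sd_A2 = 11.0, 3.2   # anchor in group 2
mu_Y, sd_Y = 28.0, 5.5     # form Y in group 2

def chained_linear_equate(x):
    """Map an X score to the Y scale through the anchor test."""
    a = mu_A1 + (sd_A1 / sd_X) * (x - mu_X)     # X -> anchor (group 1)
    return mu_Y + (sd_Y / sd_A2) * (a - mu_A2)  # anchor -> Y (group 2)

print(chained_linear_equate(33.0))
```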
Peer reviewed
PDF on ERIC Download full text
Cornejo, Felipe A.; Castillo, Ramon D.; Saavedra, Maria A.; Vogel, Edgar H. – Psicologica: International Journal of Methodology and Experimental Psychology, 2010
Considerable research has examined the contrasting predictions of configural and elemental associative accounts of learning. One of the simplest methods to distinguish between these approaches is the summation test, in which the associative strength of a novel compound (AB) made of two separately trained cues (A+ and B+) is examined. The…
Descriptors: Animals, Cues, Classical Conditioning, Prediction
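A sketch of how the summation test separates the two accounts, using a Rescorla-Wagner (elemental) simulation; the parameters are illustrative, not the authors':

```python
# Sketch (assumed parameters, not the authors' code): Rescorla-Wagner
# simulation of the summation test. Cues A and B are reinforced
# separately (A+, B+); an elemental model predicts that the novel
# compound AB evokes roughly the summed strength V_A + V_B.
alpha, lam = 0.3, 1.0          # learning rate, asymptote
V = {"A": 0.0, "B": 0.0}

for _ in range(50):            # separate A+ and B+ training trials
    for cue in ("A", "B"):
        V[cue] += alpha * (lam - V[cue])

v_compound = V["A"] + V["B"]   # elemental prediction for AB
print(f"V_A={V['A']:.2f}, V_B={V['B']:.2f}, AB (elemental)={v_compound:.2f}")
# A configural account would instead treat AB as a new stimulus and
# predict responding closer to that of a single trained cue (~1.0).
```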
Peer reviewed
Direct link
Yang-Wallentin, Fan; Joreskog, Karl G.; Luo, Hao – Structural Equation Modeling: A Multidisciplinary Journal, 2010
Ordinal variables are common in many empirical investigations in the social and behavioral sciences. Researchers often apply the maximum likelihood method to fit structural equation models to ordinal data. This assumes that the observed measures have normal distributions, which is not the case when the variables are ordinal. A better approach is…
Descriptors: Structural Equation Models, Factor Analysis, Least Squares Statistics, Computation
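The truncated sentence presumably continues to estimation based on polychoric correlations, which treat each ordinal item as a coarsened latent normal variable. A sketch of the two-category (tetrachoric) case by grid-search maximum likelihood, with a made-up contingency table:

```python
# Sketch of the latent-normal idea behind polychoric correlations,
# shown for the two-category (tetrachoric) case; counts are invented.
import numpy as np
from scipy.stats import norm, multivariate_normal

# counts for cells (x=0,y=0), (0,1), (1,0), (1,1)
n = np.array([[40.0, 15.0], [10.0, 35.0]])
N = n.sum()
t1 = norm.ppf(n[0].sum() / N)    # threshold for item 1 from its margin
t2 = norm.ppf(n[:, 0].sum() / N)

def loglik(r):
    p00 = multivariate_normal.cdf([t1, t2], mean=[0, 0],
                                  cov=[[1, r], [r, 1]])
    p01 = norm.cdf(t1) - p00
    p10 = norm.cdf(t2) - p00
    p11 = 1 - p00 - p01 - p10
    p = np.clip(np.array([p00, p01, p10, p11]), 1e-12, 1.0)
    return (n.ravel() * np.log(p)).sum()

grid = np.linspace(-0.95, 0.95, 381)
r_hat = grid[np.argmax([loglik(r) for r in grid])]
print(f"tetrachoric r ~ {r_hat:.2f}")
```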
Peer reviewed
Direct link
Chang, Yuan-chin Ivan; Lu, Hung-Yi – Psychometrika, 2010
Item calibration is an essential issue in modern item response theory based psychological or educational testing. Due to the popularity of computerized adaptive testing, methods to efficiently calibrate new items have become more important than they were when paper-and-pencil test administration was the norm. There are many calibration…
Descriptors: Test Items, Educational Testing, Adaptive Testing, Measurement
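A sketch of the core idea behind online calibration in a CAT setting (an illustration under simple assumptions, not a method from the article): abilities are treated as known from the operational test, and the new item's difficulty is estimated by maximum likelihood.

```python
# Sketch: calibrate a new Rasch item from CAT responses, treating the
# examinees' abilities as known; difficulty found by Newton-Raphson.
# All data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=500)            # abilities from the CAT
b_true = 0.7
p = 1 / (1 + np.exp(-(theta - b_true)))
y = rng.binomial(1, p)                  # responses to the new item

b = 0.0
for _ in range(20):                     # Newton-Raphson for the MLE
    p_hat = 1 / (1 + np.exp(-(theta - b)))
    grad = np.sum(y - p_hat)            # gradient of -loglik in b
    hess = np.sum(p_hat * (1 - p_hat))  # its second derivative
    b -= grad / hess
print(f"estimated difficulty b ~ {b:.2f} (true {b_true})")
```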
Peer reviewed
PDF on ERIC Download full text
Ghafournia, Narjes; Afghari, Akbar – English Language Teaching, 2013
The study examined the interaction among the use of cognitive test-taking strategies, reading proficiency, and the reading comprehension test performance of Iranian postgraduate students who studied English as a foreign language. The study also probed the extent to which the participants' test performance was related to the use of certain…
Descriptors: Foreign Countries, Reading Comprehension, Reading Tests, English (Second Language)
Peer reviewed
Direct link
Rhemtulla, Mijke; Brosseau-Liard, Patricia E.; Savalei, Victoria – Psychological Methods, 2012
A simulation study compared the performance of robust normal theory maximum likelihood (ML) and robust categorical least squares (cat-LS) methodology for estimating confirmatory factor analysis models with ordinal variables. Data were generated from 2 models with 2-7 categories, 4 sample sizes, 2 latent distributions, and 5 patterns of category…
Descriptors: Factor Analysis, Computation, Simulation, Sample Size
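A sketch of the data-generating step such simulation studies rely on (parameters illustrative, not the study's design): ordinal responses are produced by cutting latent normal variables at thresholds, which attenuates observed correlations as categories get coarser.

```python
# Sketch: generate ordinal data by thresholding a latent bivariate
# normal; fewer categories shrink the observed Pearson correlation.
import numpy as np

rng = np.random.default_rng(1)
rho = 0.6
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=5000)

for k in (2, 5, 7):                      # number of ordinal categories
    cuts = np.quantile(z[:, 0], np.linspace(0, 1, k + 1)[1:-1])
    x = np.digitize(z[:, 0], cuts)
    y = np.digitize(z[:, 1], cuts)
    r = np.corrcoef(x, y)[0, 1]
    print(f"{k} categories: observed r = {r:.2f} (latent rho = {rho})")
```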
Peer reviewed
Direct link
Parker, Richard I.; Vannest, Kimberly J.; Davis, John L.; Clemens, Nathan H. – Journal of Special Education, 2012
Within a response to intervention model, educators increasingly use progress monitoring (PM) to support medium- to high-stakes decisions for individual students. For PM to serve these more demanding decisions, measurement error requires more careful consideration. That error should be calculated within a fixed linear regression model rather than…
Descriptors: Measurement, Computation, Response to Intervention, Regression (Statistics)
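A sketch of the kind of calculation the authors argue for, with invented weekly data: fit a fixed OLS trend line to progress-monitoring scores and report the slope together with its standard error, so the growth estimate carries its measurement error with it.

```python
# Sketch: OLS trend for weekly progress-monitoring scores (invented),
# with the standard error of the slope from the fixed regression.
import numpy as np

weeks = np.arange(10, dtype=float)
scores = np.array([21.0, 24, 22, 27, 26, 30, 29, 33, 31, 35])

X = np.column_stack([np.ones_like(weeks), weeks])
beta, *_ = np.linalg.lstsq(X, scores, rcond=None)
resid = scores - X @ beta
s2 = resid @ resid / (len(weeks) - 2)   # residual variance
cov = s2 * np.linalg.inv(X.T @ X)       # var-cov of the estimates
se_slope = np.sqrt(cov[1, 1])
print(f"slope = {beta[1]:.2f} points/week, SE = {se_slope:.2f}")
```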
Peer reviewed
Direct link
Pae, Hye K.; Greenberg, Daphne; Morris, Robin D. – Language Assessment Quarterly, 2012
The aim of this study was to apply the Rasch model to an analysis of the psychometric properties of the Peabody Picture Vocabulary Test--III Form A (PPVT--IIIA) items with struggling adult readers. The PPVT--IIIA was administered to 229 African American adults whose isolated word reading skills were between third and fifth grades. Conformity of…
Descriptors: African Americans, Test Items, Construct Validity, Test Validity
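A sketch of the Rasch machinery behind such an analysis, on simulated data rather than the PPVT items: the model's probability of a correct response, and outfit mean-squares as a simple conformity (fit) check.

```python
# Sketch: Rasch response probabilities plus outfit mean-squares,
# a common item-fit statistic; data simulated for illustration.
import numpy as np

rng = np.random.default_rng(2)
theta = rng.normal(size=300)                 # person abilities
b = np.array([-1.0, 0.0, 1.0])               # item difficulties
P = 1 / (1 + np.exp(-(theta[:, None] - b)))  # Rasch P(correct)
X = rng.binomial(1, P)                       # simulated responses

Z2 = (X - P) ** 2 / (P * (1 - P))            # squared std. residuals
outfit = Z2.mean(axis=0)                     # ~1.0 when items conform
print("outfit mean-squares:", np.round(outfit, 2))
```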
Peer reviewed
Direct link
Webber, Douglas A. – Economics of Education Review, 2012
Using detailed individual-level data from public universities in the state of Ohio, I estimate the effect of various institutional expenditures on the probability of graduating from college. Using a competing risks regression framework, I find differential impacts of expenditure categories across student characteristics. I estimate that student…
Descriptors: Student Characteristics, Educational Finance, Measurement, Probability
Peer reviewed
PDF on ERIC Download full text
National Center for Education Statistics, 2015
In 2011-12, graduate students received a total of $51.7 billion in federal loans and grants, institutional grants, employer support, and financial aid from other sources. In 2007-08, this figure was $36.7 billion (College Board 2008, 2012). The data presented in these Web Tables were collected through five administrations of the National…
Descriptors: Trend Analysis, Graduate Students, Federal Aid, Student Financial Aid
Chon, Kyong Hee – ProQuest LLC, 2009
The purpose of this study was to investigate procedures for assessing model fit of IRT models for mixed format data. In this study, various IRT model combinations were fitted to data containing both dichotomous and polytomous item responses, and the suitability of the chosen model mixtures was evaluated based on a number of model fit procedures.…
Descriptors: Item Response Theory, Test Items, Goodness of Fit, Statistical Analysis
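A toy illustration of what "mixed format" means on the likelihood side (not the dissertation's procedure): one examinee's log-likelihood combines a 2PL term for a dichotomous item with a generalized partial credit (GPC) term for a polytomous one.

```python
# Sketch: one person's log-likelihood under a mixed-format IRT model;
# all parameters and responses are invented for illustration.
import numpy as np

def p_2pl(theta, a, b):
    return 1 / (1 + np.exp(-a * (theta - b)))

def p_gpcm(theta, a, bs):
    """Category probabilities for a GPC item with step parameters bs."""
    steps = np.concatenate([[0.0], a * (theta - np.asarray(bs))])
    num = np.exp(np.cumsum(steps))
    return num / num.sum()

theta = 0.5
ll = np.log(p_2pl(theta, a=1.2, b=0.0))                # correct answer
ll += np.log(p_gpcm(theta, a=0.8, bs=[-0.5, 0.6])[1])  # score 1 of 0..2
print(f"log-likelihood at theta={theta}: {ll:.3f}")
```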
Peer reviewed
Direct link
Bollen, Kenneth A.; Davis, Walter R. – Structural Equation Modeling: A Multidisciplinary Journal, 2009
We discuss the identification, estimation, and testing of structural equation models that have causal indicators. We first provide 2 rules of identification that are particularly helpful in models with causal indicators--the 2+ emitted paths rule and the exogenous X rule. We demonstrate how these rules can help us distinguish identified from…
Descriptors: Structural Equation Models, Testing, Identification, Statistical Significance
Peer reviewed
Direct link
Andru, Peter; Botchkarev, Alexei – Journal of MultiDisciplinary Evaluation, 2011
Background: Return on investment (ROI) is one of the most popular evaluation metrics. ROI analysis (when applied correctly) is a powerful tool for evaluating existing information systems and making informed decisions on acquisitions. However, practical use of the ROI is complicated by a number of uncertainties and controversies. The article…
Descriptors: Outcomes of Education, Information Systems, School Business Officials, Evaluation Methods
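The metric itself in worked form, with invented figures:

```python
# Worked example of the ROI metric; all figures are invented.
cost = 120_000.0            # system acquisition + operation
benefit = 150_000.0         # estimated returns over the period
roi = (benefit - cost) / cost
print(f"ROI = {roi:.1%}")   # 25.0%
# The "uncertainties and controversies" the authors note enter through
# both terms: which benefits to monetize, and over what time horizon.
```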
Peer reviewed
Direct link
Woods, Carol M. – Applied Psychological Measurement, 2011
Differential item functioning (DIF) occurs when an item on a test, questionnaire, or interview has different measurement properties for one group of people versus another, irrespective of true group-mean differences on the constructs being measured. This article is focused on item response theory based likelihood ratio testing for DIF (IRT-LR or…
Descriptors: Simulation, Item Response Theory, Testing, Questionnaires
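A sketch of the testing step in IRT-LR DIF (log-likelihood values invented): the studied item's parameters are constrained equal across groups in one model and freed in another, and twice the difference in log-likelihoods is referred to a chi-square distribution.

```python
# Sketch: the likelihood-ratio comparison at the heart of IRT-LR DIF;
# the fitted log-likelihoods below are invented for illustration.
from scipy.stats import chi2

ll_constrained = -4321.7   # item parameters forced equal across groups
ll_free = -4317.2          # item parameters free to differ
df = 2                     # e.g., discrimination and difficulty freed

G2 = 2 * (ll_free - ll_constrained)
p = chi2.sf(G2, df)
print(f"G2 = {G2:.1f}, df = {df}, p = {p:.3f}")
```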
Kane, Michael – Educational Testing Service, 2010
The 12th annual William H. Angoff Memorial Lecture was presented by Dr. Michael T. Kane, ETS's (Educational Testing Service) Samuel J. Messick Chair in Test Validity and the former Director of Research at the National Conference of Bar Examiners. Dr. Kane argues that it is important for policymakers to recognize the impact of errors of measurement…
Descriptors: Error of Measurement, Scores, Public Policy, Test Theory
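A sketch of the quantity at the center of the argument, with invented numbers: the standard error of measurement implied by a score scale's reliability, and the uncertainty band it puts around a reported score.

```python
# Sketch: classical standard error of measurement (SEM) and the
# band it implies around one reported score; numbers are invented.
sd, reliability = 15.0, 0.90
sem = sd * (1 - reliability) ** 0.5    # SEM = SD * sqrt(1 - rel)
score = 100.0
print(f"SEM = {sem:.1f}; ~95% band: "
      f"{score - 1.96 * sem:.0f} to {score + 1.96 * sem:.0f}")
```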