Showing 1 to 15 of 28 results
Peer reviewed
Betts, Joe; Muntean, William; Kim, Doyoung; Kao, Shu-chuan – Educational and Psychological Measurement, 2022
The multiple response structure can underlie several different technology-enhanced item types. With the increased use of computer-based testing, multiple response items are becoming more common. This response type holds the potential for being scored polytomously for partial credit. However, there are several possible methods for computing raw…
Descriptors: Scoring, Test Items, Test Format, Raw Scores
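Although the abstract is truncated, the scoring choice it raises is easy to illustrate. Below is a minimal sketch of two common raw-score rules for a multiple-response item, dichotomous all-or-nothing versus polytomous partial credit; the rule names and details are illustrative, not necessarily the methods the article compares.

```python
def score_all_or_nothing(selected: set, keyed: set) -> int:
    """Dichotomous rule: full credit only for an exact match."""
    return int(selected == keyed)

def score_partial_credit(selected: set, keyed: set, n_options: int) -> float:
    """Polytomous rule: +1 per correct decision (selecting a keyed option
    or leaving an unkeyed option blank), rescaled to the 0..1 range."""
    correct_selections = len(selected & keyed)
    correct_omissions = n_options - len(selected | keyed)
    return (correct_selections + correct_omissions) / n_options

# Example: options A-E, keyed answers {A, C}, examinee marks {A, B, C}.
print(score_all_or_nothing({"A", "B", "C"}, {"A", "C"}))     # 0
print(score_partial_credit({"A", "B", "C"}, {"A", "C"}, 5))  # 0.8
```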
Peer reviewed
Walters, Glenn D.; Espelage, Dorothy L. – Educational and Psychological Measurement, 2019
The purpose of this study was to investigate the latent structure type (categorical vs. dimensional) of bullying perpetration in a large sample of middle school students. A nine-item bullying scale was administered to 1,222 (625 boys, 597 girls) early adolescents enrolled in middle schools in a Midwestern state. Based on the results of a principal…
Descriptors: Early Adolescents, Bullying, Middle School Students, Scores
Peer reviewed
Harrison, Allyson G.; Butt, Kaitlyn; Armstrong, Irene – Educational and Psychological Measurement, 2019
There has been a marked increase in accommodation requests from students with disabilities at both the postsecondary education level and on high-stakes examinations. As such, accurate identification and quantification of normative impairment is essential for equitable provision of accommodations. Considerable diversity currently exists in methods…
Descriptors: Achievement Tests, Test Norms, Age, Instructional Program Divisions
Peer reviewed
Lenhard, Wolfgang; Lenhard, Alexandra – Educational and Psychological Measurement, 2021
The interpretation of psychometric test results is usually based on norm scores. We compared semiparametric continuous norming (SPCN) with conventional norming methods by simulating results for test scales with different item numbers and difficulties via an item response theory approach. Subsequently, we modeled the norm scores based on random…
Descriptors: Test Norms, Scores, Regression (Statistics), Test Items
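The regression idea behind continuous norming can be sketched briefly. The following is a hedged illustration, not the SPCN algorithm itself: rank-transform raw scores to normal quantiles, then fit a low-order polynomial in raw score and age so that norms vary smoothly across (and between) age groups. Function names and the plain least-squares fit are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import norm, rankdata

def continuous_norm_fit(raw, age, degree=3):
    """Fit z ~ polynomial(raw, age); raw and age are 1-D numpy arrays."""
    pr = (rankdata(raw) - 0.5) / len(raw)   # percentile ranks
    z = norm.ppf(pr)                        # inverse normal transformation
    terms = [raw**i * age**j
             for i in range(degree + 1)
             for j in range(degree + 1) if i + j <= degree]
    X = np.column_stack(terms)
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    return beta

def continuous_norm_predict(beta, raw, age, degree=3):
    terms = [raw**i * age**j
             for i in range(degree + 1)
             for j in range(degree + 1) if i + j <= degree]
    return np.column_stack(terms) @ beta    # predicted z; T = 50 + 10 * z
```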
Peer reviewed
Ho, Andrew D.; Yu, Carol C. – Educational and Psychological Measurement, 2015
Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micceri similarly showed that the normality assumption is met rarely in educational and psychological…
Descriptors: Statistics, Scores, Statistical Distributions, Tests
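As a small illustration of the kind of distributional check involved, the sketch below computes skewness and excess kurtosis for a fabricated score distribution; values near zero are consistent with normality.

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
scores = rng.integers(0, 41, size=1000)      # fabricated test scores
print("skewness:", skew(scores))             # 0 under normality
print("excess kurtosis:", kurtosis(scores))  # 0 under normality; negative here (flat, bounded scores)
```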
Peer reviewed
Mylonas, Kostas; Furnham, Adrian; Divale, William; Leblebici, Cigdem; Gondim, Sonia; Moniz, Angela; Grad, Hector; Alvaro, Jose Luis; Cretu, Romeo Zeno; Filus, Ania; Boski, Pawel – Educational and Psychological Measurement, 2014
Several sources of bias can plague research data and individual assessment. When cultural groups are considered, across or even within countries, it is essential that the constructs assessed and evaluated are as free as possible from any source of bias, and specifically from bias caused by culturally specific characteristics. Employing the…
Descriptors: Test Bias, Measures (Individuals), Unemployment, Adults
Peer reviewed
Magis, David; De Boeck, Paul – Educational and Psychological Measurement, 2012
The identification of differential item functioning (DIF) is often performed by means of statistical approaches that consider the raw scores as proxies for the ability trait level. One of the most popular approaches, the Mantel-Haenszel (MH) method, belongs to this category. However, replacing the ability level by the simple raw score is a source…
Descriptors: Test Bias, Data, Error of Measurement, Raw Scores
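The Mantel-Haenszel approach the abstract refers to is straightforward to sketch: stratify examinees by raw score and pool the studied item's odds ratio across strata. The function below is a minimal illustration with assumed array inputs, not the authors' implementation.

```python
import numpy as np

def mantel_haenszel_or(item, group, raw_score):
    """item: 0/1 responses to the studied item; group: 0 = reference,
    1 = focal; raw_score: matching total scores used as strata."""
    num = den = 0.0
    for s in np.unique(raw_score):
        m = raw_score == s
        a = np.sum((group[m] == 0) & (item[m] == 1))  # reference, correct
        b = np.sum((group[m] == 0) & (item[m] == 0))  # reference, incorrect
        c = np.sum((group[m] == 1) & (item[m] == 1))  # focal, correct
        d = np.sum((group[m] == 1) & (item[m] == 0))  # focal, incorrect
        n = a + b + c + d
        if n > 0:
            num += a * d / n
            den += b * c / n
    return num / den  # alpha_MH; log(alpha_MH) near 0 suggests little DIF
```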
Peer reviewed
Puhan, Gautam; von Davier, Alina A.; Gupta, Shaloo – Educational and Psychological Measurement, 2010
Equating under the external anchor design is frequently conducted using scaled scores on the anchor test. However, scaled scores often lead to the unique problem of creating zero frequencies in the score distribution because there may not always be a one-to-one correspondence between raw and scaled scores. For example, raw scores of 17 and 18 may…
Descriptors: Statistical Distributions, Raw Scores, Equated Scores, Scaling
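The zero-frequency problem is easy to reproduce. In the sketch below, a made-up raw-to-scale conversion maps two raw scores to the same scaled score, leaving an intermediate scaled-score point with frequency zero; all numbers are hypothetical.

```python
# Hypothetical conversion table: raw scores 17 and 18 collide at 440.
raw_to_scale = {15: 430, 16: 435, 17: 440, 18: 440, 19: 450}
raw_counts = {15: 80, 16: 95, 17: 110, 18: 105, 19: 90}

scale_counts = {}
for raw, n in raw_counts.items():
    s = raw_to_scale[raw]
    scale_counts[s] = scale_counts.get(s, 0) + n

for s in range(430, 455, 5):
    print(s, scale_counts.get(s, 0))  # 445 comes out with frequency 0
```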
Peer reviewed
Bowden, Stephen C.; Lange, Rael T.; Weiss, Lawrence G.; Saklofske, Donald H. – Educational and Psychological Measurement, 2008
A measurement model is invoked whenever a psychological interpretation is placed on test scores. When stated in detail, a measurement model provides a description of the numerical and theoretical relationship between observed scores and the corresponding latent variables or constructs. In this way, the hypothesis that similar meaning can be…
Descriptors: Intelligence, Intelligence Tests, Measures (Individuals), Foreign Countries
Peer reviewed
DeMars, Christine E. – Educational and Psychological Measurement, 2008
The graded response (GR) and generalized partial credit (GPC) models do not imply that examinees ordered by raw observed score will necessarily be ordered on the expected value of the latent trait (OEL). Factors were manipulated to assess whether increased violations of OEL also produced increased Type I error rates in differential item…
Descriptors: Test Items, Raw Scores, Test Theory, Error of Measurement
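For reference, the generalized partial credit (GPC) model named in the abstract gives category probabilities for a polytomous item as sketched below; the parameter values in the example are illustrative.

```python
import numpy as np

def gpc_probs(theta, a, b):
    """theta: latent trait; a: discrimination; b: list of step difficulties.
    Returns P(X = 0), ..., P(X = m) for an item with m = len(b) steps."""
    steps = np.concatenate(([0.0], np.cumsum(a * (theta - np.asarray(b)))))
    exps = np.exp(steps)
    return exps / exps.sum()

print(gpc_probs(theta=0.5, a=1.2, b=[-1.0, 0.0, 1.0]))
```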
Peer reviewed
Willse, John T.; Goodman, Joshua T. – Educational and Psychological Measurement, 2008
This research provides a direct comparison of effect size estimates based on structural equation modeling (SEM), item response theory (IRT), and raw scores. Differences between the SEM, IRT, and raw score approaches are examined under a variety of data conditions (IRT models underlying the data, test lengths, magnitude of group differences, and…
Descriptors: Test Length, Structural Equation Models, Effect Size, Raw Scores
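Of the three approaches compared, the raw-score effect size is the simplest to show; the SEM- and IRT-based estimates require fitted models and are not sketched here. The following is a standard pooled standardized mean difference, assumed for illustration rather than taken from the article.

```python
import numpy as np

def cohens_d(x, y):
    """Raw-score effect size: pooled standardized mean difference."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) +
                  (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)
```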
Peer reviewed
Arce-Ferrer, Alvaro J.; Guzman, Elvira Martinez – Educational and Psychological Measurement, 2009
This study investigates the effect of mode of administration of the Raven Standard Progressive Matrices test on the distribution, accuracy, and meaning of raw scores. A random sample of high school students took counterbalanced paper-and-pencil and computer-based administrations of the test and answered a questionnaire surveying preferences for…
Descriptors: Factor Analysis, Raw Scores, Statistical Analysis, Computer Assisted Testing
Peer reviewed
Lee, Jae-Won – Educational and Psychological Measurement, 1977
McQuitty has developed a number of pattern analytic methods that can be computed by hand, but the matrices of associations used in these methods cannot be so readily computed. A simplified but exact method of computing product moment correlations based on Q sort data for McQuitty's methods is described. (Author/JKS)
Descriptors: Correlation, Factor Analysis, Q Methodology, Raw Scores
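The simplification rests on a standard identity: when two Q-sorts follow the same forced distribution, they share a mean and variance, so the Pearson correlation reduces to a function of squared rank differences alone. The sketch below follows that identity; it is not necessarily Lee's exact procedure.

```python
import numpy as np

def q_sort_r(sort_a, sort_b):
    """Pearson r between two Q-sorts with identical forced distributions."""
    a, b = np.asarray(sort_a, float), np.asarray(sort_b, float)
    d2 = np.sum((a - b) ** 2)
    ss = np.sum((a - a.mean()) ** 2)  # identical for every forced sort
    return 1.0 - d2 / (2.0 * ss)      # equals Pearson r in this case

# Two 9-item Q-sorts using the same forced distribution of ranks 1..9:
print(q_sort_r([1, 2, 3, 4, 5, 6, 7, 8, 9],
               [2, 1, 3, 4, 5, 6, 7, 9, 8]))  # 0.9667
```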
Peer reviewed
Wimberley, Ronald C. – Educational and Psychological Measurement, 1975
Describes a program for the T-score technique of normal standardization. T-scores transform a raw score distribution, regardless of its skewness or kurtosis, into a normal distribution with a mean of 50 and a standard deviation of 10. (Author/RC)
Descriptors: Computer Programs, Raw Scores, Scores, Statistical Analysis
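The transformation itself is compact enough to sketch. This is a minimal reimplementation of the normalized T-score idea (not Wimberley's program): replace raw scores with mid-rank percentiles, pass them through the inverse normal CDF, and rescale to mean 50 and standard deviation 10.

```python
import numpy as np
from scipy.stats import norm, rankdata

def t_scores(raw):
    pr = (rankdata(raw) - 0.5) / len(raw)  # mid-rank percentiles in (0, 1)
    return 50 + 10 * norm.ppf(pr)          # normalized T-scores

# A skewed raw distribution comes out approximately normal:
print(np.round(t_scores([3, 7, 7, 12, 30]), 1))
```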
Peer reviewed
Reckase, Mark D. – Educational and Psychological Measurement, 1973
A program listing, source deck, sample problem and documentation are available at cost from the author at the University of Missouri, Department of Educational Psychology, Columbia, Missouri 65201. (Author/CB)
Descriptors: Computer Programs, Input Output, Raw Scores, Scaling