Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 2 |
Since 2016 (last 10 years) | 4 |
Since 2006 (last 20 years) | 13 |
Descriptor
Raw Scores | 28 |
Statistical Analysis | 9 |
Scores | 8 |
Error of Measurement | 6 |
Factor Analysis | 6 |
Item Response Theory | 5 |
Foreign Countries | 4 |
Measures (Individuals) | 4 |
Test Theory | 4 |
Comparative Analysis | 3 |
Computer Programs | 3 |
Source
Educational and Psychological… | 28 |
Author
Aleamoni, Lawrence M. | 1 |
Alvaro, Jose Luis | 1 |
Arce-Ferrer, Alvaro J. | 1 |
Armstrong, Irene | 1 |
Betts, Joe | 1 |
Boldt, R. F. | 1 |
Boski, Pawel | 1 |
Bowden, Stephen C. | 1 |
Butt, Kaitlyn | 1 |
Chang, Shun-Wen | 1 |
Cretu, Romeo Zeno | 1 |
Publication Type
Journal Articles | 18 |
Reports - Research | 13 |
Reports - Evaluative | 3 |
Reports - Descriptive | 1 |
Education Level
Junior High Schools | 2 |
High Schools | 1 |
Higher Education | 1 |
Middle Schools | 1 |
Postsecondary Education | 1 |
Secondary Education | 1 |
Audience
Researchers | 1 |
Location
United States | 2 |
Brazil | 1 |
Canada | 1 |
Greece | 1 |
Illinois | 1 |
Mexico | 1 |
Poland | 1 |
Romania | 1 |
Spain | 1 |
Taiwan | 1 |
Turkey | 1 |
Assessments and Surveys
Minnesota Multiphasic… | 1 |
Wechsler Adult Intelligence… | 1 |
Woodcock Johnson Tests of… | 1 |
Betts, Joe; Muntean, William; Kim, Doyoung; Kao, Shu-chuan – Educational and Psychological Measurement, 2022
The multiple response structure can underlie several different technology-enhanced item types. With the increased use of computer-based testing, multiple response items are becoming more common. This response type holds the potential for being scored polytomously for partial credit. However, there are several possible methods for computing raw…
Descriptors: Scoring, Test Items, Test Format, Raw Scores
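The Betts et al. abstract notes that multiple response items can be scored polytomously for partial credit and that several methods exist for computing the raw score. As a minimal sketch only, the two scoring rules below (all-or-nothing versus one point per correctly classified option) are common illustrations, not necessarily the specific methods the article compares:

```python
# Illustrative only: two common ways to turn a multiple-response item into a
# raw score. The function names and the "count correct classifications" rule
# are assumptions for this sketch.

def score_all_or_nothing(selected: set, keyed: set) -> int:
    """1 point only if the selected options exactly match the key."""
    return int(selected == keyed)

def score_partial_credit(selected: set, keyed: set, n_options: int) -> float:
    """One point for each option classified correctly (selected if keyed,
    left blank if not keyed), rescaled to the 0-1 range."""
    correct = sum(
        (opt in selected) == (opt in keyed) for opt in range(n_options)
    )
    return correct / n_options

keyed = {0, 2, 4}        # options A, C, E are correct
response = {0, 2, 3}     # examinee picked A, C, D

print(score_all_or_nothing(response, keyed))     # 0
print(score_partial_credit(response, keyed, 5))  # 0.6 (3 of 5 options classified correctly)
```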
Walters, Glenn D.; Espelage, Dorothy L. – Educational and Psychological Measurement, 2019
The purpose of this study was to investigate the latent structure type (categorical vs. dimensional) of bullying perpetration in a large sample of middle school students. A nine-item bullying scale was administered to 1,222 (625 boys, 597 girls) early adolescents enrolled in middle schools in a Midwestern state. Based on the results of a principal…
Descriptors: Early Adolescents, Bullying, Middle School Students, Scores
Harrison, Allyson G.; Butt, Kaitlyn; Armstrong, Irene – Educational and Psychological Measurement, 2019
There has been a marked increase in accommodation requests from students with disabilities at both the postsecondary education level and on high-stakes examinations. As such, accurate identification and quantification of normative impairment is essential for equitable provision of accommodations. Considerable diversity currently exists in methods…
Descriptors: Achievement Tests, Test Norms, Age, Instructional Program Divisions
Lenhard, Wolfgang; Lenhard, Alexandra – Educational and Psychological Measurement, 2021
The interpretation of psychometric test results is usually based on norm scores. We compared semiparametric continuous norming (SPCN) with conventional norming methods by simulating results for test scales with different item numbers and difficulties via an item response theory approach. Subsequently, we modeled the norm scores based on random…
Descriptors: Test Norms, Scores, Regression (Statistics), Test Items
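Lenhard and Lenhard compare semiparametric continuous norming with conventional norming. The sketch below illustrates only the general idea of regression-based continuous norming (smoothing band-wise percentile norms with a polynomial in raw score and age); it is not the SPCN algorithm from the article, and the data, age bands, and polynomial terms are assumptions for illustration.

```python
# Concept sketch of continuous norming, not the SPCN method itself.
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(1)
age = rng.uniform(6, 12, size=1000)                    # continuous age in years
raw = rng.binomial(40, 0.3 + 0.05 * age)               # ability grows with age

# Step 1: conventional norming within coarse age bands (percentile -> T-score).
band = np.floor(age).astype(int)
pct = np.empty_like(raw, dtype=float)
for b in np.unique(band):
    idx = band == b
    pct[idx] = (rankdata(raw[idx]) - 0.5) / idx.sum()
t_conv = 50 + 10 * norm.ppf(pct)                       # band-wise T-scores

# Step 2: smooth the band-wise norms with a polynomial in raw score and age.
X = np.column_stack([np.ones_like(age), raw, raw**2, age, raw * age])
beta, *_ = np.linalg.lstsq(X, t_conv, rcond=None)
t_continuous = X @ beta                                # continuously normed scores

print(np.corrcoef(t_conv, t_continuous)[0, 1].round(3))
```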
Ho, Andrew D.; Yu, Carol C. – Educational and Psychological Measurement, 2015
Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micceri similarly showed that the normality assumption is met rarely in educational and psychological…
Descriptors: Statistics, Scores, Statistical Distributions, Tests
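Ho and Yu's article concerns how far real score distributions depart from normality. A small illustration of the kind of summary involved, computing sample skewness and excess kurtosis for a simulated skewed score distribution (the data here are invented, not operational test scores):

```python
# Quantifying departures from normality via skewness and excess kurtosis.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(42)
scores = rng.beta(a=5, b=2, size=5000) * 60        # a negatively skewed "test"

print(f"skewness:        {skew(scores):.2f}")       # 0 for a normal distribution
print(f"excess kurtosis: {kurtosis(scores):.2f}")   # 0 for a normal distribution
```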
Mylonas, Kostas; Furnham, Adrian; Divale, William; Leblebici, Cigdem; Gondim, Sonia; Moniz, Angela; Grad, Hector; Alvaro, Jose Luis; Cretu, Romeo Zeno; Filus, Ania; Boski, Pawel – Educational and Psychological Measurement, 2014
Several sources of bias can plague research data and individual assessment. When cultural groups are considered, across or even within countries, it is essential that the constructs assessed and evaluated are as free as possible from any source of bias and specifically from bias caused due to culturally specific characteristics. Employing the…
Descriptors: Test Bias, Measures (Individuals), Unemployment, Adults
Magis, David; De Boeck, Paul – Educational and Psychological Measurement, 2012
The identification of differential item functioning (DIF) is often performed by means of statistical approaches that consider the raw scores as proxies for the ability trait level. One of the most popular approaches, the Mantel-Haenszel (MH) method, belongs to this category. However, replacing the ability level by the simple raw score is a source…
Descriptors: Test Bias, Data, Error of Measurement, Raw Scores
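The Mantel-Haenszel method referred to here matches examinees on the raw total score and compares item performance of the reference and focal groups within each score level. A minimal sketch with simulated data (the simulation settings and variable names are assumptions, not taken from the article):

```python
# Mantel-Haenszel DIF with the raw total score as the matching variable.
import numpy as np

def mantel_haenszel_or(item: np.ndarray, group: np.ndarray, total: np.ndarray) -> float:
    """Common odds ratio for one studied item, stratifying on raw total score.
    item:  0/1 responses to the studied item
    group: 0 = reference, 1 = focal
    total: raw total score used as a proxy for ability
    """
    num = den = 0.0
    for k in np.unique(total):
        s = total == k
        a = np.sum((group[s] == 0) & (item[s] == 1))   # reference, correct
        b = np.sum((group[s] == 0) & (item[s] == 0))   # reference, incorrect
        c = np.sum((group[s] == 1) & (item[s] == 1))   # focal, correct
        d = np.sum((group[s] == 1) & (item[s] == 0))   # focal, incorrect
        t = a + b + c + d
        if t > 0:
            num += a * d / t
            den += b * c / t
    return num / den if den > 0 else float("nan")

rng = np.random.default_rng(7)
n = 2000
group = rng.integers(0, 2, n)
theta = rng.normal(0, 1, n)
other_items = (rng.normal(0, 1, (n, 20)) < theta[:, None]).astype(int)
item = (rng.normal(0, 1, n) < theta - 0.3 * group).astype(int)   # mild DIF against focal group
total = other_items.sum(axis=1) + item                           # raw score including the studied item

alpha = mantel_haenszel_or(item, group, total)
print(f"MH odds ratio: {alpha:.2f}  (Delta-MH: {-2.35 * np.log(alpha):.2f})")
```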
Puhan, Gautam; von Davier, Alina A.; Gupta, Shaloo – Educational and Psychological Measurement, 2010
Equating under the external anchor design is frequently conducted using scaled scores on the anchor test. However, scaled scores often lead to the unique problem of creating zero frequencies in the score distribution because there may not always be a one-to-one correspondence between raw and scaled scores. For example, raw scores of 17 and 18 may…
Descriptors: Statistical Distributions, Raw Scores, Equated Scores, Scaling
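A toy example of the problem the abstract describes: when several raw scores convert to the same scaled score, some scaled-score values never occur, producing zero frequencies in the anchor score distribution. The conversion table below is invented for illustration:

```python
# Hypothetical raw-to-scaled conversion (not from the article) showing how
# a many-to-one mapping leaves gaps, i.e., zero-frequency scaled scores.
from collections import Counter

conversion = {15: 150, 16: 152, 17: 155, 18: 155, 19: 158, 20: 161}

raw_scores = [15, 16, 16, 17, 18, 18, 19, 20, 20, 20]
scaled = [conversion[r] for r in raw_scores]

freq = Counter(scaled)
for s in range(150, 162):
    print(s, freq.get(s, 0))   # scaled scores such as 151, 153, 154 have zero frequency
```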
Bowden, Stephen C.; Lange, Rael T.; Weiss, Lawrence G.; Saklofske, Donald H. – Educational and Psychological Measurement, 2008
A measurement model is invoked whenever a psychological interpretation is placed on test scores. When stated in detail, a measurement model provides a description of the numerical and theoretical relationship between observed scores and the corresponding latent variables or constructs. In this way, the hypothesis that similar meaning can be…
Descriptors: Intelligence, Intelligence Tests, Measures (Individuals), Foreign Countries
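As a concrete and entirely hypothetical illustration of a measurement model in the sense used here, the snippet below simulates four observed subtest scores as linear functions of a single latent variable plus error; the loadings and intercepts are invented and are not the parameters studied in the article:

```python
# A one-factor measurement model: observed scores = intercept + loading * latent + error.
import numpy as np

rng = np.random.default_rng(5)
n = 1000
g = rng.normal(0, 1, n)                          # latent construct
loadings = np.array([0.8, 0.7, 0.6, 0.75])       # hypothetical factor loadings
intercepts = np.array([10.0, 12.0, 9.0, 11.0])
errors = rng.normal(0, np.sqrt(1 - loadings**2), (n, 4))
observed = intercepts + g[:, None] * loadings + errors   # four observed subtests

print(np.corrcoef(observed, rowvar=False).round(2))      # correlations implied by the model
```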
DeMars, Christine E. – Educational and Psychological Measurement, 2008
The graded response (GR) and generalized partial credit (GPC) models do not imply that examinees ordered by raw observed score will necessarily be ordered on the expected value of the latent trait (OEL). Factors were manipulated to assess whether increased violations of OEL also produced increased Type I error rates in differential item…
Descriptors: Test Items, Raw Scores, Test Theory, Error of Measurement
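For readers unfamiliar with the generalized partial credit (GPC) model named in the abstract, the sketch below computes its category probabilities and the expected item score at a few trait values; the item parameters are invented, and this does not reproduce the Type I error study itself:

```python
# Category probabilities under the generalized partial credit model.
import numpy as np

def gpc_probs(theta: float, a: float, b: np.ndarray) -> np.ndarray:
    """GPC category probabilities for one item.
    theta: latent trait value
    a:     discrimination
    b:     step parameters (length m for an item scored 0..m)
    """
    steps = np.concatenate(([0.0], a * (theta - b)))   # Z_0 = 0 by convention
    num = np.exp(np.cumsum(steps))
    return num / num.sum()

b = np.array([-0.5, 0.2, 1.0])                         # hypothetical step parameters
for theta in (-1.0, 0.0, 1.0):
    p = gpc_probs(theta, a=1.2, b=b)
    print(theta, p.round(3), "expected item score:", (p * np.arange(4)).sum().round(2))
```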
Willse, John T.; Goodman, Joshua T. – Educational and Psychological Measurement, 2008
This research provides a direct comparison of effect size estimates based on structural equation modeling (SEM), item response theory (IRT), and raw scores. Differences between the SEM, IRT, and raw score approaches are examined under a variety of data conditions (IRT models underlying the data, test lengths, magnitude of group differences, and…
Descriptors: Test Length, Structural Equation Models, Effect Size, Raw Scores
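One of the three approaches compared is an effect size computed directly from raw scores. A minimal example of that baseline, a pooled standardized mean difference on simulated total scores (the data are made up):

```python
# Cohen's d from observed raw scores, the raw-score baseline in the comparison.
import numpy as np

def cohens_d(x: np.ndarray, y: np.ndarray) -> float:
    """Standardized mean difference using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(3)
group_a = rng.normal(25, 5, 300)    # simulated raw total scores
group_b = rng.normal(23, 5, 300)
print(round(cohens_d(group_a, group_b), 2))
```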
Arce-Ferrer, Alvaro J.; Guzman, Elvira Martinez – Educational and Psychological Measurement, 2009
This study investigates the effect of mode of administration of the Raven Standard Progressive Matrices test on distribution, accuracy, and meaning of raw scores. A random sample of high school students take counterbalanced paper-and-pencil and computer-based administrations of the test and answer a questionnaire surveying preferences for…
Descriptors: Factor Analysis, Raw Scores, Statistical Analysis, Computer Assisted Testing

Lee, Jae-Won – Educational and Psychological Measurement, 1977
McQuitty has developed a number of pattern analytic methods that can be computed by hand, but the matrices of associations used in these methods cannot be so readily computed. A simplified but exact method of computing product moment correlations based on Q sort data for McQuitty's methods is described. (Author/JKS)
Descriptors: Correlation, Factor Analysis, Q Methodology, Raw Scores
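The quantity at issue is the product-moment correlation between two respondents' Q sorts, which McQuitty's pattern-analytic methods take as input. The example below computes that correlation for two hypothetical nine-statement Q sorts; it does not reproduce Lee's simplified hand-computation method:

```python
# Pearson (product-moment) correlation between two Q sorts.
import numpy as np

# Two hypothetical Q sorts of 9 statements into a -2..+2 forced distribution.
person_a = np.array([-2, -1, -1, 0, 0, 0, 1, 1, 2])
person_b = np.array([-1, -2, 0, -1, 1, 0, 2, 0, 1])

r = np.corrcoef(person_a, person_b)[0, 1]
print(round(r, 3))
```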

Wimberley, Ronald C. – Educational and Psychological Measurement, 1975
Describes a program for the T-score technique of normal standardization. T-scores transform a raw score distribution, regardless of its skewness or kurtosis, into a normal distribution with a mean of 50 and a standard deviation of 10. (Author/RC)
Descriptors: Computer Programs, Raw Scores, Scores, Statistical Analysis
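The abstract fully specifies the target of the transformation: percentile ranks are mapped through the inverse normal distribution and rescaled to mean 50 and standard deviation 10. A minimal modern sketch of that normalized T-score computation follows; the original article documents a standalone computer program, and the data here are simulated:

```python
# Normalized T-scores: rank -> percentile -> inverse normal -> mean 50, SD 10.
import numpy as np
from scipy.stats import norm, rankdata

def t_scores(raw: np.ndarray) -> np.ndarray:
    pct = (rankdata(raw) - 0.5) / len(raw)     # mid-rank percentile for each raw score
    return 50 + 10 * norm.ppf(pct)             # normalized T-scores

rng = np.random.default_rng(11)
raw = rng.exponential(scale=10, size=500)      # a heavily skewed raw-score distribution
t = t_scores(raw)
print(round(t.mean(), 1), round(t.std(), 1))   # approximately 50 and 10
```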

Reckase, Mark D. – Educational and Psychological Measurement, 1973
A program listing, source deck, sample problem and documentation are available at cost from the author at the University of Missouri, Department of Educational Psychology, Columbia, Missouri 65201. (Author/CB)
Descriptors: Computer Programs, Input Output, Raw Scores, Scaling