Publication Date
| In 2026 | 0 |
| Since 2025 | 59 |
| Since 2022 (last 5 years) | 416 |
| Since 2017 (last 10 years) | 919 |
| Since 2007 (last 20 years) | 1970 |
Audience
| Researchers | 93 |
| Practitioners | 23 |
| Teachers | 22 |
| Policymakers | 10 |
| Administrators | 5 |
| Students | 4 |
| Counselors | 2 |
| Parents | 2 |
| Community | 1 |
Location
| United States | 47 |
| Germany | 42 |
| Australia | 34 |
| Canada | 27 |
| Turkey | 27 |
| California | 22 |
| United Kingdom (England) | 20 |
| Netherlands | 18 |
| China | 17 |
| New York | 15 |
| United Kingdom | 15 |
What Works Clearinghouse Rating
| Does not meet standards | 1 |
Balbuena, Sherwin E.; Maligalig, Dalisay S.; Quimbo, Maria Ana T. – Online Submission, 2021
The University Student Depression Inventory (USDI; Khawaja and Bryden 2006) is a 30-item scale that is used to measure depressive symptoms among university students. Its psychometric properties have been widely investigated under classical test theory (CTT). This study explored the application of the polytomous Rasch partial credit model…
Descriptors: Item Response Theory, Likert Scales, College Students, Depression (Psychology)
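The entry above applies the polytomous Rasch partial credit model (PCM) to a Likert-type scale. A minimal sketch of the PCM's category probabilities follows, using the standard textbook formulation with hypothetical step parameters; it is not the authors' estimation code.

```python
import numpy as np

def pcm_category_probs(theta, thresholds):
    """Partial credit model: probability of each response category for a
    person with ability `theta` on an item whose step (threshold)
    parameters are given in `thresholds`."""
    # Cumulative sums of (theta - threshold) form the category numerators;
    # category 0 corresponds to the empty sum, i.e. exp(0) = 1.
    steps = np.concatenate(([0.0], np.cumsum(theta - np.asarray(thresholds))))
    expnum = np.exp(steps - steps.max())   # stabilise before exponentiating
    return expnum / expnum.sum()

# Hypothetical 5-point Likert item (four step parameters)
print(pcm_category_probs(theta=0.5, thresholds=[-1.0, -0.2, 0.4, 1.3]))
```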
Fan Pan – ProQuest LLC, 2021
This dissertation informed researchers about the performance of different level-specific and target-specific model fit indices in the Multilevel Latent Growth Model (MLGM) under unbalanced designs and different growth trajectories. As the use of MLGMs is relatively new, this study helps further the field by informing researchers interested in using…
Descriptors: Goodness of Fit, Item Response Theory, Growth Models, Monte Carlo Methods
Wenjing Guo – ProQuest LLC, 2021
Constructed response (CR) items are widely used in large-scale testing programs, including the National Assessment of Educational Progress (NAEP) and many district and state-level assessments in the United States. One unique feature of CR items is that they depend on human raters to assess the quality of examinees' work. The judgment of human…
Descriptors: National Competency Tests, Responses, Interrater Reliability, Error of Measurement
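Since the entry above centers on human raters scoring constructed-response items, a minimal sketch of one common rater-agreement index, quadratic-weighted Cohen's kappa, is shown below with hypothetical rubric scores; it is not necessarily the agreement measure used in the dissertation.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical scores (0-4 rubric) assigned by two raters to the same ten essays
rater_a = [3, 2, 4, 1, 0, 2, 3, 4, 2, 1]
rater_b = [3, 2, 3, 1, 1, 2, 3, 4, 2, 2]

# Quadratic weights penalise large disagreements more than adjacent-category ones
print(cohen_kappa_score(rater_a, rater_b, weights="quadratic"))
```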
Shear, Benjamin R.; Nordstokke, David W.; Zumbo, Bruno D. – Practical Assessment, Research & Evaluation, 2018
This computer simulation study evaluates the robustness of the nonparametric Levene test of equal variances (Nordstokke & Zumbo, 2010) when sampling from populations with unequal (and unknown) means. Testing for population mean differences when population variances are unknown and possibly unequal is often referred to as the Behrens-Fisher…
Descriptors: Nonparametric Statistics, Computer Simulation, Monte Carlo Methods, Sampling
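A minimal sketch of a rank-based Levene procedure in the spirit of the test studied above: pool and rank the scores, take absolute deviations of the ranks from each group's mean rank, and run a one-way ANOVA on those deviations. The data are simulated with unequal means and equal variances, mirroring the Behrens-Fisher setting described in the abstract; this is an illustration, not the authors' simulation code.

```python
import numpy as np
from scipy import stats

def rank_based_levene(*groups):
    """Rank-based Levene test of equal variances."""
    pooled = np.concatenate(groups)
    ranks = stats.rankdata(pooled)
    # split the pooled ranks back into the original groups
    split_points = np.cumsum([len(g) for g in groups])[:-1]
    rank_groups = np.split(ranks, split_points)
    deviations = [np.abs(r - r.mean()) for r in rank_groups]
    return stats.f_oneway(*deviations)

rng = np.random.default_rng(1)
g1 = rng.normal(0, 1, 40)   # unequal means, equal variances
g2 = rng.normal(2, 1, 60)
print(rank_based_levene(g1, g2))
```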
Factor Structure and Psychometric Properties of the Digital Stress Scale in a Chinese College Sample
Chunlei Gao; Mingqing Jian; Ailin Yuan – SAGE Open, 2024
The Digital Stress Scale (DSS) is used to measure digital stress, which is the perceived stress and anxiety associated with social media use. In this study, the Chinese version of the DSS was validated using a sample of 721 Chinese college students, 321 males and 400 females (KMO = 0.923; Bartlett = 5,058.492, p < 0.001). Confirmatory factor…
Descriptors: Factor Structure, Factor Analysis, Psychometrics, Anxiety
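The abstract above reports KMO and Bartlett statistics as factorability checks before confirmatory factor analysis. A minimal sketch of Bartlett's test of sphericity (chi-square = -[(n - 1) - (2p + 5)/6] ln|R|) is given below with simulated item scores; it is illustrative only and not the authors' analysis code.

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(data):
    """Bartlett's test of sphericity: does the correlation matrix of the
    (n, p) score matrix `data` differ from an identity matrix?"""
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))
    df = p * (p - 1) / 2
    return chi2, df, stats.chi2.sf(chi2, df)

rng = np.random.default_rng(0)
items = rng.normal(size=(721, 10)) + rng.normal(size=(721, 1))  # shared factor
print(bartlett_sphericity(items))
```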
Radu Bogdan Toma – Journal of Early Adolescence, 2024
The Expectancy-Value model has been extensively used to understand students' achievement motivation. However, recent studies propose the inclusion of cost as a separate construct from values, leading to the development of the Expectancy-Value-Cost model. This study aimed to adapt Kosovich et al.'s ("The Journal of Early Adolescence", 35,…
Descriptors: Student Motivation, Student Attitudes, Academic Achievement, Mathematics Achievement
Fiel, Jeremy E. – Sociology of Education, 2020
A long-standing consensus among sociologists holds that educational attainment has an equalizing effect that increases mobility by moderating other avenues of intergenerational status transmission. This study argues that the evidence supporting this consensus may be distorted by two problems: measurement error in parents' socioeconomic standing…
Descriptors: Educational Attainment, Social Mobility, Family Income, Longitudinal Studies
von Oertzen, Timo; Schmiedek, Florian; Voelkle, Manuel C. – Journal of Intelligence, 2020
Properties of psychological variables at the mean or variance level can differ between persons and within persons across multiple time points. For example, cross-sectional findings between persons of different ages do not necessarily reflect the development of a single person over time. Recently, there has been an increased interest in the…
Descriptors: Cognitive Ability, Individual Differences, Statistical Analysis, Factor Analysis
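The abstract's central point, that cross-sectional between-person relations need not mirror within-person change, can be made concrete with a small simulation. The sketch below uses entirely hypothetical numbers: older cohorts start lower, so the between-person age trend is negative, yet every individual improves across waves.

```python
import numpy as np

rng = np.random.default_rng(42)
n_persons, n_waves = 200, 5

# Older cohorts start lower (negative between-person effect of age)...
age_at_baseline = rng.uniform(20, 70, n_persons)
intercepts = 60 - 0.4 * age_at_baseline + rng.normal(0, 3, n_persons)
# ...but every person gains a little at each wave (positive within-person change)
waves = np.arange(n_waves)
scores = intercepts[:, None] + 0.8 * waves + rng.normal(0, 1, (n_persons, n_waves))

between = np.corrcoef(age_at_baseline, scores[:, 0])[0, 1]
within = np.mean([np.corrcoef(waves, s)[0, 1] for s in scores])
print(f"between-person age correlation: {between:.2f}")      # negative
print(f"mean within-person time correlation: {within:.2f}")  # positive
```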
Lee, Won-Chan; Kim, Stella Y.; Choi, Jiwon; Kang, Yujin – Journal of Educational Measurement, 2020
This article considers psychometric properties of composite raw scores and transformed scale scores on mixed-format tests that consist of a mixture of multiple-choice and free-response items. Test scores on several mixed-format tests are evaluated with respect to conditional and overall standard errors of measurement, score reliability, and…
Descriptors: Raw Scores, Item Response Theory, Test Format, Multiple Choice Tests
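Two textbook quantities underlying the evaluation described above are the overall classical standard error of measurement, SD x sqrt(1 - reliability), and a conditional SEM for number-correct scores (Lord's binomial-error formula). The sketch below computes both with hypothetical values; it does not reproduce the article's procedures for composite or scale scores.

```python
import numpy as np

def overall_sem(score_sd, reliability):
    """Classical overall standard error of measurement: SD * sqrt(1 - rel)."""
    return score_sd * np.sqrt(1 - reliability)

def lord_conditional_sem(raw_score, n_items):
    """Lord's binomial-error conditional SEM for a number-correct score x
    on an n-item test: sqrt(x * (n - x) / (n - 1))."""
    x = np.asarray(raw_score, dtype=float)
    return np.sqrt(x * (n_items - x) / (n_items - 1))

print(overall_sem(score_sd=10.0, reliability=0.91))
print(lord_conditional_sem(raw_score=[10, 20, 30, 38], n_items=40))
```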
Kopp, Jason P.; Jones, Andrew T. – Applied Measurement in Education, 2020
Traditional psychometric guidelines suggest that at least several hundred respondents are needed to obtain accurate parameter estimates under the Rasch model. However, recent research indicates that Rasch equating results in accurate parameter estimates with sample sizes as small as 25. Item parameter drift under the Rasch model has been…
Descriptors: Item Response Theory, Psychometrics, Sample Size, Sampling
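As a rough illustration of the item parameter drift question raised above, the sketch below derives crude Rasch difficulties from the log-odds of item p-values (PROX-style starting values), centers them, and flags items whose difficulty shifts noticeably between two small administrations. The data, the threshold, and the estimation shortcut are all hypothetical and much simpler than proper Rasch calibration.

```python
import numpy as np

def rough_rasch_difficulties(responses):
    """Crude Rasch item difficulties from log-odds of item p-values, centered."""
    p = np.clip(responses.mean(axis=0), 0.01, 0.99)   # item proportion correct
    b = np.log((1 - p) / p)
    return b - b.mean()

def flag_drift(resp_old, resp_new, threshold=0.5):
    """Flag items whose centered difficulty shifts by more than `threshold` logits."""
    shift = rough_rasch_difficulties(resp_new) - rough_rasch_difficulties(resp_old)
    return np.where(np.abs(shift) > threshold)[0], shift

rng = np.random.default_rng(3)
old = (rng.random((25, 10)) < 0.6).astype(int)   # 25 examinees, 10 items
new = (rng.random((25, 10)) < 0.6).astype(int)
print(flag_drift(old, new))
```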
Ippel, Lianne; Magis, David – Educational and Psychological Measurement, 2020
In the dichotomous item response theory (IRT) framework, the asymptotic standard error (ASE) is the most common statistic for evaluating the precision of various ability estimators. Easy-to-use ASE formulas are readily available; however, the accuracy of some of these formulas was recently questioned, and new ASE formulas were derived from a general…
Descriptors: Item Response Theory, Error of Measurement, Accuracy, Standards
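The textbook ASE for the maximum-likelihood ability estimate is 1/sqrt(I(theta)), where I(theta) is the test information; under the 2PL, I(theta) = sum of a_i^2 P_i (1 - P_i). The sketch below computes this baseline formula with hypothetical item parameters; it is not one of the corrected formulas discussed in the article.

```python
import numpy as np

def ase_ml_2pl(theta_hat, a, b):
    """Asymptotic standard error of the ML ability estimate under the 2PL:
    ASE = 1 / sqrt(test information), I(theta) = sum a_i^2 * P_i * (1 - P_i)."""
    p = 1 / (1 + np.exp(-a * (theta_hat - b)))
    information = np.sum(a**2 * p * (1 - p))
    return 1 / np.sqrt(information)

a = np.array([1.1, 0.7, 1.4, 0.9, 1.0])    # hypothetical discriminations
b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])  # hypothetical difficulties
print(ase_ml_2pl(theta_hat=0.4, a=a, b=b))
```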
Berchtold, André – International Journal of Social Research Methodology, 2019
Most quantitative studies in the social sciences suffer from missing data. However, despite the wide availability of documentation and software for treating such data, it appears that many social scientists do not apply good practices regarding missing data. We analyzed quantitative papers published in 2017 in six top-level social science journals.…
Descriptors: Research Problems, Research Methodology, Social Science Research, Data Analysis
Lai, Mark H. C. – Journal of Educational and Behavioral Statistics, 2019
Previous studies have detailed the consequences of ignoring a level of clustering in multilevel models with strictly hierarchical structures and have proposed methods to adjust the fixed-effect standard errors (SEs). However, in behavioral and social science research, there are usually two or more crossed clustering levels, such as when…
Descriptors: Error of Measurement, Hierarchical Linear Modeling, Least Squares Statistics, Statistical Bias
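For intuition about why ignoring a clustering level understates SEs, the classic single-level design-effect adjustment, DEFF = 1 + (n_bar - 1) x ICC, is sketched below with hypothetical numbers. This is a standard textbook correction for one ignored cluster level, not the crossed-cluster adjustment proposed in the article above.

```python
import numpy as np

def design_effect_adjusted_se(naive_se, avg_cluster_size, icc):
    """Inflate a naive (independence-assuming) standard error by the square
    root of the design effect for one ignored level of clustering."""
    deff = 1 + (avg_cluster_size - 1) * icc
    return naive_se * np.sqrt(deff)

# e.g. 30 students per classroom and ICC = .15: the naive SE is badly understated
print(design_effect_adjusted_se(naive_se=0.05, avg_cluster_size=30, icc=0.15))
```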
Sheng, Yanyan – Measurement: Interdisciplinary Research and Perspectives, 2019
The classical approach to test theory has been the foundation of educational and psychological measurement for over 90 years. This approach is concerned with measurement error and hence test reliability, which in part depends on individual test items. The CTT package, developed in light of this, provides functions for test- and item-level analyses of…
Descriptors: Item Response Theory, Test Reliability, Item Analysis, Error of Measurement
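The entry above describes an R package for classical test- and item-level analyses. A minimal Python analogue of two of the core quantities such analyses report, Cronbach's alpha and corrected item-total correlations, is sketched below with simulated item scores; it does not use or mirror that package's API.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_persons, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items):
    """Correlation of each item with the total of the remaining items."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total - items[:, j])[0, 1]
                     for j in range(items.shape[1])])

rng = np.random.default_rng(7)
# simulated dichotomous responses with a common person propensity
scores = (rng.random((100, 8)) < 0.5 + 0.3 * rng.random((100, 1))).astype(int)
print(cronbach_alpha(scores))
print(corrected_item_total(scores))
```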
Van Norman, Ethan R.; Klingbeil, David A.; McLendon, Katherine E. – Remedial and Special Education, 2019
Researchers and practitioners frequently use curriculum-based measures of reading (CBM-R) within single-case design (SCD) frameworks to evaluate the effects of reading interventions with individual students. Effect sizes (ESs) developed specifically for SCDs are often used as a supplement to visual analysis to gauge treatment effects. The degree…
Descriptors: Oral Reading, Error of Measurement, Progress Monitoring, Effect Size
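One simple nonoverlap effect size often reported for single-case designs like those discussed above is NAP (nonoverlap of all pairs): the proportion of baseline-intervention pairs in which the intervention observation exceeds the baseline one. The sketch below uses hypothetical words-correct-per-minute data and is not necessarily one of the specific effect sizes evaluated in the article.

```python
import numpy as np

def nonoverlap_all_pairs(baseline, intervention):
    """NAP: proportion of all (baseline, intervention) pairs in which the
    intervention observation exceeds the baseline one (ties count half)."""
    a = np.asarray(baseline, dtype=float)
    b = np.asarray(intervention, dtype=float)
    diffs = b[None, :] - a[:, None]
    wins = (diffs > 0).sum() + 0.5 * (diffs == 0).sum()
    return wins / diffs.size

# Hypothetical words-correct-per-minute scores across sessions
print(nonoverlap_all_pairs(baseline=[42, 45, 44, 43],
                           intervention=[48, 50, 47, 52, 55]))
```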
