Showing 91 to 105 of 624 results
Peer reviewed
Patton, Jeffrey M.; Cheng, Ying; Hong, Maxwell; Diao, Qi – Journal of Educational and Behavioral Statistics, 2019
In psychological and survey research, the prevalence and serious consequences of careless responses from unmotivated participants are well known. In this study, we propose to iteratively detect careless responders and cleanse the data by removing their responses. The careless responders are detected using person-fit statistics. In two simulation…
Descriptors: Test Items, Response Style (Tests), Identification, Computation
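The iterative cleansing procedure described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it flags respondents whose person-fit index is extreme, drops them, re-estimates item statistics on the remaining sample, and repeats until no new respondents are flagged. The fit index here is a simple normalized Guttman-error count chosen for brevity; Patton et al. use model-based person-fit statistics, and the `cutoff` value is an arbitrary assumption.

```python
def guttman_errors(resp, item_order):
    # Count pairs where a harder item is answered correctly while an
    # easier item is missed (a "Guttman error" for this respondent).
    ordered = [resp[i] for i in item_order]  # easiest -> hardest
    errors = 0
    for a in range(len(ordered)):
        for b in range(a + 1, len(ordered)):
            if ordered[a] == 0 and ordered[b] == 1:
                errors += 1
    return errors

def iterative_cleanse(data, cutoff=0.3):
    # data: list of 0/1 response vectors, one per respondent.
    kept = list(range(len(data)))
    while True:
        # Re-estimate item easiness from the currently kept respondents.
        n_items = len(data[0])
        p = [sum(data[r][i] for r in kept) / len(kept) for i in range(n_items)]
        order = sorted(range(n_items), key=lambda i: -p[i])  # easiest first
        max_err = n_items * (n_items - 1) / 2
        flagged = [r for r in kept
                   if guttman_errors(data[r], order) / max_err > cutoff]
        if not flagged:
            return kept
        kept = [r for r in kept if r not in flagged]
```

Removing flagged respondents changes the item statistics, which is why the loop re-estimates them each round rather than flagging once.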
Peer reviewed
Rice, Kenneth G.; Srisarajivakul, Emily N.; Meyers, Joel; Varjas, Kris – School Psychology, 2019
One evaluation measure available through the Positive Behavioral Interventions and Supports framework is the Effective Behavior Support Self-Assessment Survey (SAS). Evaluations of the SAS have supported its factor structure. However, the SAS is designed to be completed by school personnel who are nested within other levels of analysis (e.g.,…
Descriptors: Factor Analysis, Factor Structure, Self Evaluation (Individuals), Teacher Surveys
Peer reviewed
Mameli, Consuelo; Passini, Stefano – Journal of Psychoeducational Assessment, 2019
The elusive character of student agency makes it a relevant construct to be investigated and measured. An initial effort in this direction was represented by the Agentic Engagement Scale, a five-item instrument designed to assess the degree to which students constructively contribute to the flow of the instructions they receive from the teacher.…
Descriptors: Measures (Individuals), Test Construction, Test Validity, Learner Engagement
Peer reviewed
Raborn, Anthony W.; Leite, Walter L.; Marcoulides, Katerina M. – International Educational Data Mining Society, 2019
Short forms of psychometric scales have been commonly used in educational and psychological research to reduce the burden of test administration. However, it is challenging to select items for a short form that preserve the validity and reliability of the scores of the original scale. This paper presents and evaluates multiple automated methods…
Descriptors: Psychometrics, Measures (Individuals), Mathematics, Heuristics
Peer reviewed
Yormaz, Seha; Sünbül, Önder – Educational Sciences: Theory and Practice, 2017
This study aims to determine the Type I error rates and power of the S1 and S2 indices and the kappa statistic at detecting copying on multiple-choice tests under various conditions. It also examines how the way copying groups are formed affects the Type I error rates and power of the kappa statistic. In this study,…
Descriptors: Statistical Analysis, Cheating, Multiple Choice Tests, Sample Size
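As a generic illustration of the kappa-type agreement index studied here (not the specific S1/S2 copying indices, which are model-based), kappa corrects the observed agreement between two examinees' answer strings for the agreement expected by chance from each examinee's option-choice frequencies:

```python
from collections import Counter

def kappa(answers_a, answers_b):
    # Chance-corrected agreement between two answer vectors.
    # Note: undefined when expected agreement equals 1 (identical
    # constant responding); omitted here for brevity.
    n = len(answers_a)
    p_obs = sum(a == b for a, b in zip(answers_a, answers_b)) / n
    freq_a = Counter(answers_a)
    freq_b = Counter(answers_b)
    p_exp = sum(freq_a[opt] * freq_b.get(opt, 0) for opt in freq_a) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)
```

Identical answer strings yield kappa = 1; agreement at chance level yields kappa near 0, which is why unusually high kappa between a pair of examinees can flag possible copying.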
Peer reviewed
Kilic, Abdullah Faruk; Uysal, Ibrahim; Atar, Burcu – International Journal of Assessment Tools in Education, 2020
This Monte Carlo simulation study aimed to investigate confirmatory factor analysis (CFA) estimation methods under different conditions, such as sample size, distribution of indicators, test length, average factor loading, and factor structure. Binary data were generated to compare the performance of maximum likelihood (ML), mean and variance…
Descriptors: Factor Analysis, Computation, Methods, Sample Size
Peer reviewed
Isbell, Daniel R.; Kremmel, Benjamin – Language Testing, 2020
Administration of high-stakes language proficiency tests has been disrupted in many parts of the world as a result of the 2019 novel coronavirus pandemic. Institutions that rely on test scores have been forced to adapt, and in many cases this means using scores from a different test, or a new online version of an existing test, that can be taken…
Descriptors: Language Tests, High Stakes Tests, Language Proficiency, Second Language Learning
Peer reviewed
Manna, Venessa F.; Gu, Lixiong – ETS Research Report Series, 2019
When using the Rasch model, equating with a nonequivalent groups anchor test design is commonly achieved by adjustment of new form item difficulty using an additive equating constant. Using simulated 5-year data, this report compares 4 approaches to calculating the equating constants and the subsequent impact on equating results. The 4 approaches…
Descriptors: Item Response Theory, Test Items, Test Construction, Sample Size
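One standard way to obtain the additive equating constant in a Rasch nonequivalent-groups anchor-test design is mean-mean equating: shift the new form's difficulties by the mean difference on the common anchor items. This is a generic sketch of that idea, not necessarily one of the four approaches the report compares:

```python
def mean_mean_constant(anchor_old, anchor_new):
    # anchor_old / anchor_new: Rasch difficulty estimates of the same
    # anchor items, calibrated separately on the old and new forms.
    return (sum(anchor_old) / len(anchor_old)
            - sum(anchor_new) / len(anchor_new))

def equate_new_form(new_difficulties, constant):
    # Place all new-form item difficulties on the old form's scale.
    return [b + constant for b in new_difficulties]
```

Because the Rasch model has a single additive scale indeterminacy, a single constant suffices to link the two calibrations; the approaches compared in the report differ in how that constant is computed.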
Peer reviewed
Qiu, Yuxi; Huggins-Manley, Anne Corinne – Educational and Psychological Measurement, 2019
This study aimed to assess the accuracy of the empirical item characteristic curve (EICC) preequating method given the presence of test speededness. The simulation design of this study considered the proportion of speededness, speededness point, speededness rate, proportion of missing on speeded items, sample size, and test length. After crossing…
Descriptors: Accuracy, Equated Scores, Test Items, Nonparametric Statistics
Peer reviewed
Xiao, Yang; Koenig, Kathleen; Han, Jing; Liu, Jing; Liu, Qiaoyi; Bao, Lei – Physical Review Physics Education Research, 2019
Standardized concept inventories (CIs) have been widely used in science, technology, engineering, and mathematics education for assessment of student learning. In practice, there have been concerns regarding the length of the test and possible test-retest memory effect. To address these issues, a recent study developed a method to split a CI into…
Descriptors: Scientific Concepts, Science Tests, Energy, Magnets
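A simple generic way to split an inventory into two difficulty-matched half tests (a sketch only, not the method the cited study developed) is to sort items by difficulty and alternate assignment, so both halves span the full difficulty range:

```python
def split_half(item_difficulties):
    # Sort item indices from easiest to hardest, then deal them
    # alternately into two half forms.
    order = sorted(range(len(item_difficulties)),
                   key=lambda i: item_difficulties[i])
    half_a = order[0::2]
    half_b = order[1::2]
    return half_a, half_b
```

Administering the two halves on different occasions shortens each sitting and avoids repeating identical items, which addresses the test length and test-retest memory concerns the abstract raises.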
Peer reviewed
Damrongpanit, Suntonrapot – Universal Journal of Educational Research, 2019
The purposes of this study were to test the structural validity and the parameter invariance of the self-discipline measurement model for good student citizenship across models, using data from 1,047 complete questionnaires and from reduced-length questionnaires constructed with a multiple matrix sampling technique. The sample size of this…
Descriptors: Factor Structure, Questionnaires, Test Length, Citizenship
Lance M. Kruse – ProQuest LLC, 2019
This study explores six item-reduction methodologies used to shorten an existing complex problem-solving non-objective test by evaluating how each shortened form performs across three sources of validity evidence (i.e., test content, internal structure, and relationships with other variables). Two concerns prompted the development of the present…
Descriptors: Educational Assessment, Comparative Analysis, Test Format, Test Length
Peer reviewed
Kilic, Abdullah Faruk; Dogan, Nuri – International Journal of Assessment Tools in Education, 2021
Weighted least squares (WLS), weighted least squares mean-and-variance-adjusted (WLSMV), unweighted least squares mean-and-variance-adjusted (ULSMV), maximum likelihood (ML), robust maximum likelihood (MLR) and Bayesian estimation methods were compared in mixed item response type data via Monte Carlo simulation. The percentage of polytomous items,…
Descriptors: Factor Analysis, Computation, Least Squares Statistics, Maximum Likelihood Statistics
Peer reviewed
Alessio, Helaine M.; Malay, Nancy; Maurer, Karsten; Bailer, A. John; Rubin, Beth – International Review of Research in Open and Distributed Learning, 2018
Traditional and online university courses share expectations for quality content and rigor. Student and faculty concerns about compromised academic integrity and actual instances of academic dishonesty in assessments, especially with online testing, are increasingly troublesome. Recent research suggests that in the absence of proctoring, the time…
Descriptors: Supervision, Majors (Students), Computer Assisted Testing, Scores
Peer reviewed
Hamby, Tyler – Journal of Psychoeducational Assessment, 2018
In this study, the author examined potential mediators of the negative relationship between the absolute difference in items' lengths and their inter-item correlation size. Fifty-two randomly ordered items from five personality scales were administered to 622 university students, and 46 respondents from a survey website rated the items'…
Descriptors: Correlation, Personality Traits, Undergraduate Students, Difficulty Level