Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 1 |
Since 2016 (last 10 years) | 6 |
Since 2006 (last 20 years) | 16 |
Descriptor
Correlation | 18 |
Error Patterns | 18 |
Evaluation Methods | 18 |
Simulation | 6 |
Statistical Analysis | 6 |
Comparative Analysis | 5 |
Monte Carlo Methods | 5 |
Sample Size | 5 |
Foreign Countries | 4 |
Student Evaluation | 4 |
Computer Assisted Testing | 3 |
Author
An, Min | 1 |
Apple, Kristen | 1 |
Bogner, F. X. | 1 |
Brydges, Ryan | 1 |
Bulte, Isis | 1 |
Chan, Daniel W.-L. | 1 |
Chan, Wai | 1 |
Cimetta, Adriana D. | 1 |
Coleman, Edmund B. | 1 |
Conradty, C. | 1 |
Cunningham, James W. | 1 |
Publication Type
Journal Articles | 17 |
Reports - Research | 14 |
Reports - Evaluative | 2 |
Collected Works - Proceedings | 1 |
Reports - Descriptive | 1 |
Tests/Questionnaires | 1 |
Education Level
Higher Education | 4 |
Postsecondary Education | 4 |
Secondary Education | 3 |
Grade 6 | 2 |
High Schools | 2 |
Elementary Education | 1 |
Intermediate Grades | 1 |
Junior High Schools | 1 |
Middle Schools | 1 |
Assessments and Surveys
Flesch Kincaid Grade Level… | 1 |
Lexile Scale of Reading | 1 |
Program for International… | 1 |
Dalton, Sarah Grace; Stark, Brielle C.; Fromm, Davida; Apple, Kristen; MacWhinney, Brian; Rensch, Amanda; Rowedder, Madyson – Journal of Speech, Language, and Hearing Research, 2022
Purpose: The aim of this study was to advance the use of structured, monologic discourse analysis by validating an automated scoring procedure for core lexicon (CoreLex) using transcripts. Method: Forty-nine transcripts from persons with aphasia and 48 transcripts from persons with no brain injury were retrieved from the AphasiaBank database. Five…
Descriptors: Validity, Discourse Analysis, Databases, Scoring
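The Dalton et al. entry above centers on automated core-lexicon (CoreLex) scoring of transcripts. As a minimal sketch of the general idea only, assuming a plain-text transcript and a placeholder core word list (the validated CoreLex checklists and lemmatization rules are not reproduced here):

```python
# Minimal sketch: count how many items from a core word list occur in a
# transcript. The placeholder lexicon and naive tokenization are assumptions;
# the published CoreLex procedure works on lemmatized CHAT transcripts.
import re

CORE_LEXICON = {"girl", "boy", "dog", "ball", "run", "fall"}  # placeholder items

def corelex_score(transcript: str, core_lexicon: set) -> int:
    """Return how many core-lexicon items appear at least once."""
    tokens = set(re.findall(r"[a-z']+", transcript.lower()))
    return len(core_lexicon & tokens)

print(corelex_score("The boy threw the ball and the dog chased it.", CORE_LEXICON))  # -> 3
```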
Cunningham, James W.; Hiebert, Elfrieda H.; Mesmer, Heidi Anne – Reading and Writing: An Interdisciplinary Journal, 2018
In recent years, readability formulas have gained new prominence as a basis for selecting texts for learning and assessment. Variables that quantitative tools count (e.g., word frequency, sentence length) provide valid measures of text complexity insofar as they accurately predict representative and high-quality criteria. The longstanding…
Descriptors: Readability, Readability Formulas, Evaluation Methods, Correlation
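Because this entry (and the Flesch Kincaid Grade Level item under Assessments and Surveys above) turns on counting surface features such as words per sentence, here is a minimal sketch of the standard Flesch-Kincaid Grade Level formula; the vowel-group syllable counter is a rough assumption, not the dictionary-based counting real readability tools use:

```python
# Sketch of the Flesch-Kincaid Grade Level formula:
#   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
# Syllables are approximated by counting vowel groups (an assumption);
# assumes non-empty text.
import re

def count_syllables(word: str) -> int:
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade_level(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

print(round(fk_grade_level("Readability formulas predict text difficulty from surface features."), 2))
```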
Tavares, Walter; Brydges, Ryan; Myre, Paul; Prpic, Jason; Turner, Linda; Yelle, Richard; Huiskamp, Maud – Advances in Health Sciences Education, 2018
Assessment of clinical competence is complex and inference-based. Trustworthy and defensible assessment processes must have favourable evidence of validity, particularly where decisions are considered high stakes. We aimed to organize, collect and interpret validity evidence for a high-stakes, simulation-based assessment strategy for certifying…
Descriptors: Competence, Simulation, Allied Health Personnel, Certification
Guo, Xiuyan; Lei, Pui-Wa – International Journal of Testing, 2020
Little research has been done on the effects of peer raters' quality characteristics on peer rating qualities. This study aims to address this gap and investigate the effects of key variables related to peer raters' qualities, including content knowledge, previous rating experience, training on rating tasks, and rating motivation. In an experiment…
Descriptors: Peer Evaluation, Error Patterns, Correlation, Knowledge Level
Yu, Chong Ho; Douglas, Samantha; Lee, Anna; An, Min – Practical Assessment, Research & Evaluation, 2016
This paper aims to illustrate how data visualization could be utilized to identify errors prior to modeling, using an example with multi-dimensional item response theory (MIRT). MIRT combines item response theory and factor analysis to identify a psychometric model that investigates two or more latent traits. While it may seem convenient to…
Descriptors: Visualization, Item Response Theory, Sample Size, Correlation
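The Yu et al. entry recommends visual checks of the data before fitting a multidimensional IRT model. As one hedged illustration of that general idea (simulated data, not the authors' procedure), a heatmap of inter-item correlations can expose a miscoded item before any model is estimated:

```python
# Sketch: visualize an inter-item correlation matrix before IRT modeling,
# so miscoded items (e.g., a reverse-keyed column) stand out.
# The data are simulated; this is not the procedure from the article.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
ability = rng.normal(size=500)
items = (ability[:, None] + rng.normal(size=(500, 8)) > 0).astype(int)
items[:, 3] = 1 - items[:, 3]          # simulate one reverse-keyed (miscoded) item

corr = np.corrcoef(items, rowvar=False)
plt.imshow(corr, vmin=-1, vmax=1, cmap="coolwarm")
plt.colorbar(label="inter-item correlation")
plt.title("Check for anomalous items before fitting MIRT")
plt.show()
```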
Wei, Tao; Schnur, Tatiana T. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2016
Processing semantically related stimuli creates interference across various domains of cognition, including language and memory. In this study, we identify the locus and mechanism of interference when retrieving meanings associated with words and pictures. Subjects matched a probe stimulus (e.g., cat) to its associated target picture (e.g., yarn)…
Descriptors: Semantics, Cues, Pictorial Stimuli, Interference (Learning)
English, John; English, Tammy – Journal of Information Technology Education: Innovations in Practice, 2015
In this paper we discuss the use of automated assessment in a variety of computer science courses that have been taught at Israel Academic College by the authors. The course assignments were assessed entirely automatically using Checkpoint, a web-based automated assessment framework. The assignments all used free-text questions (where the students…
Descriptors: Computer Science Education, Computer Assisted Testing, Foreign Countries, College Students
Socha, Alan; DeMars, Christine E. – Educational and Psychological Measurement, 2013
Modeling multidimensional test data with a unidimensional model can result in serious statistical errors, such as bias in item parameter estimates. Many methods exist for assessing the dimensionality of a test. The current study focused on DIMTEST. Using simulated data, the effects of sample size splitting for use with the ATFIND procedure for…
Descriptors: Sample Size, Test Length, Correlation, Test Format
Murayama, Kou; Sakaki, Michiko; Yan, Veronica X.; Smith, Garry M. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2014
In order to examine metacognitive accuracy (i.e., the relationship between metacognitive judgment and memory performance), researchers often rely on by-participant analysis, where metacognitive accuracy (e.g., resolution, as measured by the gamma coefficient or signal detection measures) is computed for each participant and the computed values are…
Descriptors: Metacognition, Memory, Accuracy, Statistical Analysis
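Since the Murayama et al. entry turns on by-participant resolution measured with the gamma coefficient, here is a minimal sketch of Goodman-Kruskal gamma for a single participant, defined as (concordant − discordant) / (concordant + discordant) pairs with ties ignored; the judgment and recall data are invented:

```python
# Sketch of Goodman-Kruskal gamma for one participant:
#   gamma = (concordant pairs - discordant pairs) / (concordant + discordant)
# Tied pairs are ignored, as in the usual definition. Data are invented.
from itertools import combinations

def gamma(judgments, performance):
    concordant = discordant = 0
    for (j1, p1), (j2, p2) in combinations(zip(judgments, performance), 2):
        s = (j1 - j2) * (p1 - p2)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# One participant's confidence judgments (0-100) and recall outcomes (0/1).
print(gamma([80, 60, 90, 20, 50], [1, 0, 1, 0, 1]))  # -> 0.666...
```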
Haardorfer, Regine; Gagne, Phill – Focus on Autism and Other Developmental Disabilities, 2010
Some researchers have argued for the use of or have attempted to make use of randomization tests in single-subject research. To address this tide of interest, the authors of this article describe randomization tests, discuss the theoretical rationale for applying them to single-subject research, and provide an overview of the methodological…
Descriptors: Research Design, Researchers, Evaluation Methods, Research Methodology
Conradty, C.; Bogner, F. X. – Educational Studies, 2012
Our study focuses on the correlation of concept map (CMap) structures and learning success tested with short answer tests, taking into particular account the complexity of the subject matter. Novice sixth grade students created CMaps about two subject matters of varying difficulty. The correlation of the complexity of CMaps with the post-test was…
Descriptors: Concept Mapping, Cognitive Structures, Grade 6, Correlation
Manolov, Rumen; Solanas, Antonio; Bulte, Isis; Onghena, Patrick – Journal of Experimental Education, 2010
This study deals with the statistical properties of a randomization test applied to an ABAB design in cases where the desirable random assignment of the points of change in phase is not possible. To obtain information about each possible data division, the authors carried out a conditional Monte Carlo simulation with 100,000 samples for each…
Descriptors: Monte Carlo Methods, Effect Size, Simulation, Evaluation Methods
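The Haardorfer and Gagne and Manolov et al. entries both concern randomization tests for single-case data. As a generic, hedged sketch (invented scores, a simple AB comparison rather than the article's conditional ABAB procedure), a Monte Carlo randomization test samples admissible intervention points and compares the observed phase-mean difference with the resulting reference distribution:

```python
# Generic Monte Carlo randomization test for a single-case AB comparison.
# Invented data; the set of admissible intervention points is an assumption,
# and this is not the conditional ABAB procedure simulated in the article.
import numpy as np

rng = np.random.default_rng(1)
scores = np.array([3, 4, 2, 3, 4, 7, 8, 6, 7, 9], dtype=float)
observed_start = 5                          # intervention assumed at index 5

def mean_diff(data, start):
    return data[start:].mean() - data[:start].mean()

observed = mean_diff(scores, observed_start)
candidate_starts = np.arange(3, 8)          # admissible start points (assumption)
sampled = rng.choice(candidate_starts, size=10_000)
null = np.array([mean_diff(scores, s) for s in sampled])
p_value = np.mean(np.abs(null) >= abs(observed))
print(f"observed diff = {observed:.2f}, randomization p = {p_value:.3f}")
```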
Murphy, Daniel L.; Pituch, Keenan A. – Journal of Experimental Education, 2009
The authors examined the robustness of multilevel linear growth curve modeling to misspecification of an autoregressive moving average process. As previous research has shown (J. Ferron, R. Dailey, & Q. Yi, 2002; O. Kwok, S. G. West, & S. B. Green, 2007; S. Sivo, X. Fan, & L. Witta, 2005), estimates of the fixed effects were unbiased, and Type I…
Descriptors: Sample Size, Computation, Evaluation Methods, Longitudinal Studies
D'Agostino, Jerome V.; Welsh, Megan E.; Cimetta, Adriana D.; Falco, Lia D.; Smith, Shannon; VanWinkle, Waverely Hester; Powers, Sonya J. – Applied Measurement in Education, 2008
Central to the standards-based assessment validation process is an examination of the alignment between state standards and test items. Several alignment analysis systems have emerged recently, but most rely on either traditional rating or matching techniques. Few, if any, analyses have been reported on the degree of consistency between the two…
Descriptors: Test Items, Student Evaluation, State Standards, Evaluation Methods
Kromrey, Jeffrey D.; Rendina-Gobioff, Gianna – Educational and Psychological Measurement, 2006
The performance of methods for detecting publication bias in meta-analysis was evaluated using Monte Carlo methods. Four methods of bias detection were investigated: Begg's rank correlation, Egger's regression, funnel plot regression, and trim and fill. Five factors were included in the simulation design: number of primary studies in each…
Descriptors: Comparative Analysis, Meta Analysis, Monte Carlo Methods, Correlation
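The Kromrey and Rendina-Gobioff entry lists Egger's regression among the bias-detection methods it evaluates. As a minimal sketch of that single method (invented effect sizes and standard errors), Egger's test regresses the standardized effect (effect / SE) on precision (1 / SE) and asks whether the intercept departs from zero:

```python
# Sketch of Egger's regression test for funnel-plot asymmetry:
# regress standardized effect (effect / SE) on precision (1 / SE);
# an intercept far from zero suggests small-study / publication bias.
# Effect sizes and standard errors below are invented.
import numpy as np
from scipy import stats

effects = np.array([0.42, 0.55, 0.30, 0.61, 0.48, 0.70, 0.25, 0.52])
ses     = np.array([0.10, 0.18, 0.08, 0.25, 0.12, 0.30, 0.07, 0.15])

precision = 1.0 / ses
standardized = effects / ses
result = stats.linregress(precision, standardized)

# Under no asymmetry the intercept should be near zero.
t_intercept = result.intercept / result.intercept_stderr
df = len(effects) - 2
p = 2 * stats.t.sf(abs(t_intercept), df)
print(f"Egger intercept = {result.intercept:.2f}, p = {p:.3f}")
```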