Showing 751 to 765 of 47,458 results
Peer reviewed
PDF on ERIC Download full text
Beck, Klaus – Frontline Learning Research, 2020
Many test developers try to ensure the content validity of their tests by having external experts review the items, e.g., in terms of relevance, difficulty, or clarity. Although this approach is widely accepted, a closer look reveals several pitfalls that need to be avoided if experts' advice is to be truly helpful. The purpose of this paper is to…
Descriptors: Content Validity, Psychological Testing, Educational Testing, Student Evaluation
Peer reviewed
PDF on ERIC Download full text
Sahin, Melek Gulsah – International Journal of Assessment Tools in Education, 2020
Computer Adaptive Multistage Testing (ca-MST), which takes advantage of computer technology and adaptive test forms, is widely used and is now a popular topic in assessment and evaluation. This study aims at analyzing the effect of different panel designs, module lengths, and different sequences of a parameter value across stages and change in…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Response Theory
Peer reviewed
Direct link
Madsen, Adrian; McKagan, Sarah B.; Sayre, Eleanor C. – Physics Teacher, 2020
Physics faculty care about their students learning physics content. In addition, they usually hope that their students will learn some deeper lessons about thinking critically and scientifically. They hope that as a result of taking a physics class, students will come to appreciate physics as a coherent and logical method of understanding the…
Descriptors: Science Instruction, Physics, Student Surveys, Student Attitudes
Peer reviewed
PDF on ERIC Download full text
Crooks, Noelle M.; Bartel, Anna N.; Alibali, Martha W. – Statistics Education Research Journal, 2019
In recent years, there have been calls for researchers to report and interpret confidence intervals (CIs) rather than relying solely on p-values. Such reforms, however, may be hindered by a general lack of understanding of CIs and how to interpret them. In this study, we assessed conceptual knowledge of CIs in undergraduate and graduate psychology…
Descriptors: Undergraduate Students, Graduate Students, Psychology, Statistics
Peer reviewed
Direct link
Schweizer, Karl; Reiß, Siegbert; Troche, Stefan – Educational and Psychological Measurement, 2019
The article reports three simulation studies conducted to find out whether the effect of a time limit for testing impairs model fit in investigations of structural validity, whether representing the assumed source of the effect prevents the impairment of model fit, and whether it is possible to identify and discriminate this method effect from…
Descriptors: Timed Tests, Testing, Barriers, Testing Problems
Peer reviewed
PDF on ERIC Download full text
Crooks, Noelle M.; Bartel, Anna N.; Alibali, Martha W. – Grantee Submission, 2019
In recent years, there have been calls for researchers to report and interpret confidence intervals (CIs) rather than relying solely on p-values. Such reforms, however, may be hindered by a general lack of understanding of CIs and how to interpret them. In this study, we assessed conceptual knowledge of CIs in undergraduate and graduate psychology…
Descriptors: Undergraduate Students, Graduate Students, Psychology, Statistics
Peer reviewed
Direct link
Peltier, Corey; Vannest, Kimberly J.; Tomaszewski, Brianne R.; Morin, Kristi; Sallese, Mary Rose; Pulos, Joshua M. – Exceptionality, 2022
The current study examined the criterion validity of a computer adaptive universal screener with an end-of-year state mathematics assessment using extant data provided by a local education agency. Participants included 1,195 third through eighth graders. Correlational analyses were used to report predictive and concurrent validity coefficients for…
Descriptors: Adaptive Testing, Computer Assisted Testing, Screening Tests, Mathematics Tests
Peer reviewed
PDF on ERIC Download full text
Jones, Daniel Marc; Cheng, Liying; Tweedie, M. Gregory – Canadian Journal of Learning and Technology, 2022
This article reviews recent literature (2011-present) on the automated scoring (AS) of writing and speaking. Its purpose is first to survey the current research on automated scoring of language and then to highlight how automated scoring affects the present and future of assessment, teaching, and learning. The article begins by outlining the general…
Descriptors: Automation, Computer Assisted Testing, Scoring, Writing (Composition)
Rogers, Christopher M.; Ressa, Virginia A.; Thurlow, Martha L.; Lazarus, Sheryl S. – National Center on Educational Outcomes, 2022
This report provides an update on the state of the research on testing accommodations. Previous reports by the National Center on Educational Outcomes (NCEO) have covered research published since 1999. In this report, we summarize the research published in 2020. During 2020, 11 research studies addressed testing accommodations in the U.S. K-12…
Descriptors: Elementary Secondary Education, Testing Accommodations, Students with Disabilities, Computer Assisted Testing
Peer reviewed
Direct link
Veale, Clinton G. L. – Journal of Chemical Education, 2022
The shift to online education has taught several important pedagogical lessons in a very short time frame, none more so than in approaches to assessment. This experience, coupled with improved access to education, will likely see online assessments remain a major component of modern mainstream education. The powerful technologies that facilitate the…
Descriptors: Visual Aids, Search Engines, Computer Assisted Testing, Chemistry
Peer reviewed
Direct link
Ranger, Jochen; Brauer, Kay – Journal of Educational and Behavioral Statistics, 2022
The generalized S-X² test is a test of item fit for items with polytomous response formats. The test is based on a comparison of the observed and expected number of responses in strata defined by the test score. In this article, we make four contributions. We demonstrate that the performance of the generalized S-X² test…
Descriptors: Goodness of Fit, Test Items, Statistical Analysis, Item Response Theory
Peer reviewed
Direct link
Rios, Joseph A.; Deng, Jiayi; Ihlenfeldt, Samuel D. – Educational Assessment, 2022
The present meta-analysis sought to quantify the average degree of aggregated test score distortion due to rapid guessing (RG). Included studies group-administered a low-stakes cognitive assessment, identified RG via response times, and reported the rate of examinees engaging in RG, the percentage of RG responses observed, and/or the degree of…
Descriptors: Guessing (Tests), Testing Problems, Scores, Item Response Theory
Peer reviewed
Direct link
Schoch, Kerstin; Ostermann, Thomas – Creativity Research Journal, 2022
Although art has been subject to psychological research for some time, the artwork itself has received little attention in quantitative research. The rating instrument for two-dimensional pictorial works ("RizbA") fills this gap by providing a tool for formal picture analysis. This study validates the questionnaire on 294 images created by…
Descriptors: Psychometrics, Art, Measures (Individuals), Visual Arts
Peer reviewed
Direct link
Williams, Hollis – Physics Teacher, 2022
It is well known that Newton's work on mechanics depended in a crucial way on the previous observations of Galileo. The key insight of Galileo was that one can analyze the motion of bodies using experiments and mathematical equations. One experimental observation that roughly emerges from this work in modern terms is that two objects of different…
Descriptors: Scientific Principles, Mechanics (Physics), Motion, Equations (Mathematics)
Peer reviewed
PDF on ERIC Download full text
McCaffrey, Daniel F.; Casabianca, Jodi M.; Ricker-Pedley, Kathryn L.; Lawless, René R.; Wendler, Cathy – ETS Research Report Series, 2022
This document describes a set of best practices for developing, implementing, and maintaining the critical process of scoring constructed-response tasks. These practices address both the use of human raters and automated scoring systems as part of the scoring process and cover the scoring of written, spoken, performance, or multimodal responses.…
Descriptors: Best Practices, Scoring, Test Format, Computer Assisted Testing