Showing 751 to 765 of 47,466 results
Peer reviewed
Bramley, Tom; Crisp, Victoria – Assessment in Education: Principles, Policy & Practice, 2019
For many years, question choice has been used in some UK public examinations, with students free to choose which questions they answer from a selection (within certain parameters). Little research on the choice of exam questions in the UK has been published in recent years. In this article we distinguish different scenarios in which choice…
Descriptors: Test Items, Test Construction, Difficulty Level, Foreign Countries
Peer reviewed
Kilinc, Hakan; Okur, Muhammet Recep; Usta, Ilker – International Journal of Assessment Tools in Education, 2021
This study aimed to determine the factors that should be considered regarding the usability and security of online test applications used as an assessment and evaluation tool during the COVID-19 pandemic. In this context, the case study method was used to obtain the opinions of field experts. Furthermore, in this study,…
Descriptors: COVID-19, Pandemics, Computer Assisted Testing, Evaluation Methods
Jimenez, Laura; Modaffari, Jamil – Center for American Progress, 2021
Assessments are a way for stakeholders in education to understand what students know and can do. They can take many forms, including but not limited to paper and pencil or computer-adaptive formats. However, assessments do not have to be tests in the traditional sense at all; rather, they can be carried out through teacher observations of students…
Descriptors: Equal Education, Elementary Secondary Education, Futures (of Society), Computer Assisted Testing
Peter Stern – ProQuest LLC, 2021
Across the country, school districts are increasingly seeking out privately contracted psychologists to conduct psychological evaluations. As such, it is increasingly important that psychological reports adhere to best practices and be written to ensure comprehension by both parents and teachers. This study explored the potential differences…
Descriptors: Teachers, Special Education Teachers, Teacher Attitudes, Psychological Evaluation
Peer reviewed
Beck, Klaus – Frontline Learning Research, 2020
Many test developers try to ensure the content validity of their tests by having external experts review the items, e.g., in terms of relevance, difficulty, or clarity. Although this approach is widely accepted, a closer look reveals several pitfalls that need to be avoided if experts' advice is to be truly helpful. The purpose of this paper is to…
Descriptors: Content Validity, Psychological Testing, Educational Testing, Student Evaluation
Peer reviewed
Sahin, Melek Gulsah – International Journal of Assessment Tools in Education, 2020
Computer Adaptive Multistage Testing (ca-MST), which takes advantage of computer technology and an adaptive test form, is widely used and is now a popular topic in assessment and evaluation. This study aims at analyzing the effect of different panel designs, module lengths, and different sequences of a parameter value across stages and change in…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Response Theory
Peer reviewed
Madsen, Adrian; McKagan, Sarah B.; Sayre, Eleanor C. – Physics Teacher, 2020
Physics faculty care about their students learning physics content. In addition, they usually hope that their students will learn some deeper lessons about thinking critically and scientifically. They hope that as a result of taking a physics class, students will come to appreciate physics as a coherent and logical method of understanding the…
Descriptors: Science Instruction, Physics, Student Surveys, Student Attitudes
Peer reviewed
Peltier, Corey; Vannest, Kimberly J.; Tomaszewski, Brianne R.; Morin, Kristi; Sallese, Mary Rose; Pulos, Joshua M. – Exceptionality, 2022
The current study examined the criterion validity of a computer adaptive universal screener with an end-of-year state mathematics assessment using extant data provided by a local education agency. Participants included 1,195 third through eighth graders. Correlational analyses were used to report predictive and concurrent validity coefficients for…
Descriptors: Adaptive Testing, Computer Assisted Testing, Screening Tests, Mathematics Tests
Peer reviewed
Jones, Daniel Marc; Cheng, Liying; Tweedie, M. Gregory – Canadian Journal of Learning and Technology, 2022
This article reviews recent literature (2011-present) on the automated scoring (AS) of writing and speaking. Its purpose is to first survey the current research on automated scoring of language, then highlight how automated scoring impacts the present and future of assessment, teaching, and learning. The article begins by outlining the general…
Descriptors: Automation, Computer Assisted Testing, Scoring, Writing (Composition)
Rogers, Christopher M.; Ressa, Virginia A.; Thurlow, Martha L.; Lazarus, Sheryl S. – National Center on Educational Outcomes, 2022
This report provides an update on the state of the research on testing accommodations. Previous reports by the National Center on Educational Outcomes (NCEO) have covered research published since 1999. In this report, we summarize the research published in 2020. During 2020, 11 research studies addressed testing accommodations in the U.S. K-12…
Descriptors: Elementary Secondary Education, Testing Accommodations, Students with Disabilities, Computer Assisted Testing
Peer reviewed
Crooks, Noelle M.; Bartel, Anna N.; Alibali, Martha W. – Statistics Education Research Journal, 2019
In recent years, there have been calls for researchers to report and interpret confidence intervals (CIs) rather than relying solely on p-values. Such reforms, however, may be hindered by a general lack of understanding of CIs and how to interpret them. In this study, we assessed conceptual knowledge of CIs in undergraduate and graduate psychology…
Descriptors: Undergraduate Students, Graduate Students, Psychology, Statistics
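To illustrate the concept that this study assesses (reporting and correctly interpreting a confidence interval rather than relying on a p-value alone), here is a minimal worked sketch in Python; the sample data, sample size, and 95% level are hypothetical and are not drawn from the article.

# Minimal worked example (hypothetical data, not from the article): compute and
# interpret a 95% confidence interval for a sample mean.
import numpy as np
from scipy import stats

scores = np.array([72, 85, 78, 90, 66, 81, 77, 88, 70, 83])  # hypothetical sample

mean = scores.mean()
sem = stats.sem(scores)  # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, df=len(scores) - 1, loc=mean, scale=sem)

# Correct reading: if this sampling procedure were repeated many times, about 95% of
# intervals constructed this way would contain the true population mean. It does NOT
# mean there is a 95% probability that the true mean lies in this particular interval.
print(f"mean = {mean:.1f}, 95% CI = [{ci_low:.1f}, {ci_high:.1f}]")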
Peer reviewed
Schweizer, Karl; Reiß, Siegbert; Troche, Stefan – Educational and Psychological Measurement, 2019
The article reports three simulation studies conducted to find out whether the effect of a time limit for testing impairs model fit in investigations of structural validity, whether the representation of the assumed source of the effect prevents impairment of model fit, and whether it is possible to identify and discriminate this method effect from…
Descriptors: Timed Tests, Testing, Barriers, Testing Problems
Peer reviewed
Crooks, Noelle M.; Bartel, Anna N.; Alibali, Martha W. – Grantee Submission, 2019
In recent years, there have been calls for researchers to report and interpret confidence intervals (CIs) rather than relying solely on p-values. Such reforms, however, may be hindered by a general lack of understanding of CIs and how to interpret them. In this study, we assessed conceptual knowledge of CIs in undergraduate and graduate psychology…
Descriptors: Undergraduate Students, Graduate Students, Psychology, Statistics
Peer reviewed
Veale, Clinton G. L. – Journal of Chemical Education, 2022
The shift to online education has taught several important pedagogical lessons in a very short time frame, none more so than in approaches to assessment. This experience, coupled with improved access to education, will likely see online assessments remain a major component of modern mainstream education. The powerful technologies that facilitate the…
Descriptors: Visual Aids, Search Engines, Computer Assisted Testing, Chemistry
Peer reviewed
Ranger, Jochen; Brauer, Kay – Journal of Educational and Behavioral Statistics, 2022
The generalized S-X²-test is a test of item fit for items with a polytomous response format. The test is based on a comparison of the observed and expected numbers of responses in strata defined by the test score. In this article, we make four contributions. We demonstrate that the performance of the generalized S-X²-test…
Descriptors: Goodness of Fit, Test Items, Statistical Analysis, Item Response Theory
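As a rough illustration of the idea described in this abstract (comparing observed with model-expected response counts in test-score strata and forming a Pearson-type chi-square statistic), here is a minimal Python sketch. The counts, the number of estimated item parameters, and the degrees-of-freedom adjustment are hypothetical; this is not the authors' generalized S-X² implementation, which uses model-implied expected frequencies and collapses sparse strata.

# Illustrative sketch only (hypothetical numbers): an S-X²-style item-fit check
# compares observed vs. model-expected response counts within test-score strata.
import numpy as np
from scipy.stats import chi2

# Hypothetical observed counts: rows = score strata, columns = response categories (0, 1, 2)
observed = np.array([
    [30.0, 15.0,  5.0],
    [20.0, 25.0, 15.0],
    [10.0, 20.0, 30.0],
])

# Hypothetical expected counts from a fitted polytomous IRT model (same shape)
expected = np.array([
    [28.0, 16.0,  6.0],
    [22.0, 24.0, 14.0],
    [12.0, 19.0, 29.0],
])

# Pearson-type fit statistic: sum of (O - E)^2 / E over strata and categories
stat = ((observed - expected) ** 2 / expected).sum()

# Degrees of freedom: strata * (categories - 1) minus estimated item parameters
# (here we simply assume 4 item parameters were estimated, purely for illustration)
df = observed.shape[0] * (observed.shape[1] - 1) - 4
p_value = chi2.sf(stat, df)

print(f"fit statistic = {stat:.3f}, df = {df}, p = {p_value:.3f}")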