Showing 496 to 510 of 3,711 results
Peer reviewed
PDF on ERIC Download full text
Fan, Xumei; Johnson, Robert; Liu, Xiumei – New Waves-Educational Research and Development Journal, 2017
The purpose of this study was to investigate Chinese university professors' perceptions about the ethicality of classroom assessment practices. In a survey of Chinese professors, participants completed a questionnaire with 15 scenarios that depicted ethical and unethical assessment practices. Participants consisted of 555 professors from 143…
Descriptors: Foreign Countries, College Faculty, Ethics, Student Evaluation
Peer reviewed
Direct link
Chen, Yi-Jui Iva; Wilson, Mark; Irey, Robin C.; Requa, Mary K. – Language Testing, 2020
Orthographic processing -- the ability to perceive, access, differentiate, and manipulate orthographic knowledge -- is essential when learning to recognize words. Despite its critical importance in literacy acquisition, the field lacks a tool to assess this essential cognitive ability. The goal of this study was to design a computer-based…
Descriptors: Orthographic Symbols, Spelling, Word Recognition, Reading Skills
Peer reviewed
Direct link
Mittelhaëuser, Marie-Anne; Béguin, Anton A.; Sijtsma, Klaas – Journal of Educational Measurement, 2015
The purpose of this study was to investigate whether simulated differential motivation between the stakes for operational tests and anchor items produces an invalid linking result if the Rasch model is used to link the operational tests. This was done for an external anchor design and a variation of a pretest design. The study also investigated…
Descriptors: Item Response Theory, Simulation, High Stakes Tests, Pretesting
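The linking design described above places two operational test forms on a common Rasch scale through anchor items. As a minimal sketch of one common approach (mean-mean linking, not necessarily the authors' procedure), with invented anchor difficulty estimates:

```python
import numpy as np

# Hypothetical Rasch difficulty estimates for the same five anchor items,
# obtained from two separate calibrations (reference form and new form).
anchor_ref = np.array([-1.20, -0.40, 0.10, 0.75, 1.30])
anchor_new = np.array([-0.95, -0.15, 0.35, 1.05, 1.50])

# Mean-mean linking: the shift that places the new calibration on the
# reference scale is the difference between the anchor means.
shift = anchor_ref.mean() - anchor_new.mean()

# Apply the shift to all item difficulties from the new calibration.
new_form_difficulties = np.array([-2.0, -0.5, 0.0, 0.6, 1.8, 2.2])
linked_difficulties = new_form_difficulties + shift

print(f"linking constant: {shift:.3f}")
print("linked difficulties:", np.round(linked_difficulties, 3))
```

Under the Rasch model a single additive constant suffices because all items share the same discrimination; differential motivation on anchor items, as studied above, distorts that constant.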
Peer reviewed
Direct link
Orsini, A.; Pezzuti, L.; Hulbert, S. – Journal of Intellectual Disability Research, 2015
Background: It is now widely known that children with severe intellectual disability show a 'floor effect' on the Wechsler scales. This effect emerges because the practice of transforming raw scores into scaled scores eliminates any variability present in participants with low intellectual ability and because intelligence quotient (IQ) scores are…
Descriptors: Severe Mental Retardation, Raw Scores, Scores, Foreign Countries
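The floor effect discussed above arises because the raw-to-scaled conversion cannot go below the minimum scaled score, so distinct low raw scores collapse to a single value. A toy illustration with an invented conversion rule (not the Wechsler norm tables):

```python
import numpy as np

# Invented raw-to-scaled conversion: raw scores of 8 or below all map to
# the minimum scaled score of 1 (illustrative values only).
def to_scaled(raw):
    return max(1, min(19, raw - 7))

low_raw = np.array([0, 2, 4, 6, 8])   # distinct raw scores near the floor

print("scaled:", [to_scaled(r) for r in low_raw])            # all 1s
print("raw SD:", low_raw.std(),
      "-> scaled SD:", np.std([to_scaled(r) for r in low_raw]))  # 0.0
```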
Peer reviewed
Direct link
Ventista, Ourania Maria – E-Learning and Digital Media, 2018
Massive Open Online Courses appear to have high attrition rates, involve students in peer-assessment with patriotic bias and promote education for already educated people. This paper suggests a formative assessment model which takes into consideration these issues. Specifically, this paper focuses on the assessment of open-format questions in…
Descriptors: Student Evaluation, Self Evaluation (Individuals), Large Group Instruction, Online Courses
Peer reviewed
Direct link
Shah, Lisa; Hao, Jie; Rodriguez, Christian A.; Fallin, Rebekah; Linenberger-Cortes, Kimberly; Ray, Herman E.; Rushton, Gregory T. – Physical Review Physics Education Research, 2018
A generally agreed-upon tenet of the physics teaching community is the centrality of subject-specific expertise in effective teaching. However, studies that assess the content knowledge of incoming K-12 physics teachers in the U.S. have not yet been reported. Similarly lacking are studies on whether or how the demographic makeup of aspiring physics…
Descriptors: Praxis, Physics, Expertise, Demography
Peer reviewed
PDF on ERIC Download full text
Siddiqui, Ali; Sartaj, Shabana; Shah, Syed Waqar Ali – Advances in Language and Literary Studies, 2018
Language assessments and testing are crucial aspects of the teaching and learning process. The following study therefore focuses on these two aspects, taking a critical view of their practical dimension, that is, what follows a tactful teaching process. The crucial notion of practicing language assessments…
Descriptors: English (Second Language), Second Language Learning, Language Tests, Learning Processes
Peer reviewed
Direct link
Wedman, Jonathan – Scandinavian Journal of Educational Research, 2018
Gender fairness in testing can be impeded by the presence of differential item functioning (DIF), which potentially causes test bias. In this study, the presence and causes of gender-related DIF were investigated with real data from 800 items answered by 250,000 test takers. DIF was examined using the Mantel-Haenszel and logistic regression…
Descriptors: Gender Differences, College Entrance Examinations, Test Items, Vocabulary
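The Mantel-Haenszel procedure named in the abstract compares the odds of a correct response for reference and focal groups within matched score strata. A minimal sketch with invented 2x2 stratum counts (not the study's data), including the standard conversion to the ETS delta scale:

```python
import numpy as np

# Hypothetical counts per matched score stratum for one studied item:
# columns -> (ref_correct, ref_incorrect, focal_correct, focal_incorrect)
strata = np.array([
    [ 40,  60,  30,  70],
    [ 80,  40,  65,  55],
    [120,  20, 100,  35],
])

A, B, C, D = strata.T            # per-stratum cell counts
N = strata.sum(axis=1)           # stratum totals

# Mantel-Haenszel common odds ratio and the ETS delta-scale DIF index.
alpha_mh = (A * D / N).sum() / (B * C / N).sum()
mh_d_dif = -2.35 * np.log(alpha_mh)

print(f"MH odds ratio: {alpha_mh:.3f}, MH D-DIF: {mh_d_dif:.3f}")
```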
New Meridian Corporation, 2020
New Meridian Corporation has developed the "Quality Testing Standards and Criteria for Comparability Claims" (QTS) to provide guidance to states that are interested in including New Meridian content and would like to either keep reporting scores on the New Meridian Scale or use the New Meridian performance levels; that is, the state…
Descriptors: Testing, Standards, Comparative Analysis, Test Content
Peer reviewed
Direct link
Luo, Xin; Reckase, Mark D.; He, Wei – AERA Online Paper Repository, 2016
While dichotomous items dominate applications of computerized adaptive testing (CAT), polytomous and set-based items hold promise for incorporation into CAT. However, assembling a CAT that contains mixed item formats is challenging. This study investigated: (1) how the mixed-format CAT performs compared with the dichotomous-item-based CAT; (2)…
Descriptors: Test Items, Test Format, Computer Assisted Testing, Adaptive Testing
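A core step in any of the CAT variants compared above is selecting the next item; a common rule is maximum Fisher information at the provisional ability estimate. A sketch under the 2PL model with an invented item pool (the study's mixed-format assembly rules are not reproduced here):

```python
import numpy as np

def info_2pl(theta, a, b):
    """Fisher information of 2PL items at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

# Invented item pool: discriminations and difficulties.
a = np.array([0.8, 1.2, 1.5, 0.9, 1.1])
b = np.array([-1.0, -0.2, 0.3, 0.8, 1.5])

theta_hat = 0.25                      # provisional ability estimate
administered = {1}                    # items already given (by index)

info = info_2pl(theta_hat, a, b)
info[list(administered)] = -np.inf    # exclude already-used items
next_item = int(np.argmax(info))
print("next item:", next_item, "information:", round(info[next_item], 3))
```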
Peer reviewed
Direct link
Hidalgo, Ma Dolores; Benítez, Isabel; Padilla, Jose-Luis; Gómez-Benito, Juana – Sociological Methods & Research, 2017
The growing use of scales in survey questionnaires warrants addressing how polytomous differential item functioning (DIF) affects observed scale score comparisons. The aim of this study is to investigate the impact of DIF on the Type I error and effect size of the independent samples t-test on the observed total scale scores. A…
Descriptors: Test Items, Test Bias, Item Response Theory, Surveys
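The Type I error question can be made concrete with a small simulation: two groups share the same latent mean, but DIF shifts one group's observed total score, and an independent-samples t-test is run over many replications. The score distribution and shift below are invented, not taken from the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2017)

def rejection_rate(dif_shift, n=100, reps=2000, alpha=0.05):
    """Proportion of t-tests rejecting H0 when latent means are equal
    but the focal group's observed total score is shifted by DIF."""
    rejections = 0
    for _ in range(reps):
        ref = rng.normal(50, 10, n)
        foc = rng.normal(50, 10, n) + dif_shift   # DIF contaminates observed scores
        if stats.ttest_ind(ref, foc).pvalue < alpha:
            rejections += 1
    return rejections / reps

print("no DIF:", rejection_rate(0.0))   # close to the nominal 0.05
print("DIF   :", rejection_rate(2.0))   # inflated Type I error rate
```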
Peer reviewed
PDF on ERIC Download full text
Deng, Weiling; Monfils, Lora – ETS Research Report Series, 2017
Using simulated data, this study examined the impact of different levels of stringency of the valid case inclusion criterion on item response theory (IRT)-based true score equating over 5 years in the context of K-12 assessment when growth in student achievement is expected. Findings indicate that the use of the most stringent inclusion criterion…
Descriptors: Item Response Theory, Equated Scores, True Scores, Educational Assessment
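IRT true score equating, as referenced above, maps a number-correct score on one form to another by inverting the first form's test characteristic curve and evaluating the second form's at the resulting theta. A 2PL sketch with invented parameters assumed to be on a common scale (a simplification of the operational K-12 setting):

```python
import numpy as np
from scipy.optimize import brentq

def tcc(theta, a, b):
    """Test characteristic curve: expected number-correct under the 2PL."""
    return (1.0 / (1.0 + np.exp(-a * (theta - b)))).sum()

# Invented 2PL parameters for a new form (X) and an old form (Y),
# assumed to be already on a common theta scale.
a_x, b_x = np.array([1.0, 1.2, 0.8, 1.5]), np.array([-0.5, 0.0, 0.5, 1.0])
a_y, b_y = np.array([0.9, 1.1, 1.3, 1.0]), np.array([-0.8, -0.1, 0.4, 1.2])

def equate(score_x):
    # Step 1: invert form X's TCC to find the theta with that true score.
    theta = brentq(lambda t: tcc(t, a_x, b_x) - score_x, -6, 6)
    # Step 2: read off the equivalent true score on form Y.
    return tcc(theta, a_y, b_y)

print(round(equate(2.5), 3))
```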
Peer reviewed
PDF on ERIC Download full text
Song, Huan; Xu, Miao – ECNU Review of Education, 2019
Purpose: From the perspective of performance standards-based teacher education, this article aimed to address the progress and challenges of China's teacher preparation quality assurance system. Design/Approach/Methods: This review is based on policy review and case studies. Drawing on the existing research literature, this research sorted out the…
Descriptors: Quality Assurance, Educational Quality, Educational Policy, Standards
Peer reviewed
Direct link
Magis, David; Tuerlinckx, Francis; De Boeck, Paul – Journal of Educational and Behavioral Statistics, 2015
This article proposes a novel approach to detect differential item functioning (DIF) among dichotomously scored items. Unlike standard DIF methods that perform an item-by-item analysis, we propose the "LR lasso DIF method": a logistic regression (LR) model is formulated for all item responses. The model contains item-specific intercepts,…
Descriptors: Test Bias, Test Items, Regression (Statistics), Scores
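The LR lasso idea described above can be approximated with off-the-shelf tools: stack all item responses, include item-specific intercepts, a matching variable, and group-by-item terms, and let an L1 penalty shrink the non-DIF interaction coefficients to zero. The sketch below uses simulated Rasch-like data and the total score as a crude matching proxy; it illustrates the general idea, not the authors' implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, J = 500, 6                                  # persons, items
theta = rng.normal(size=n)
group = rng.integers(0, 2, size=n)             # 0 = reference, 1 = focal
b = np.linspace(-1, 1, J)                      # item difficulties
dif = np.zeros(J); dif[2] = 0.8                # only item 2 carries DIF

# Simulate Rasch-like responses with a group-specific shift on item 2.
logits = theta[:, None] - b[None, :] - dif[None, :] * group[:, None]
y = (rng.random((n, J)) < 1 / (1 + np.exp(-logits))).astype(int)

total = y.sum(axis=1)                          # crude matching variable

# Long format: one row per person-item; features are item dummies,
# the total score, and group-by-item interactions (the DIF terms).
rows, targets = [], []
for j in range(J):
    item_dummy = np.zeros(J); item_dummy[j] = 1
    for i in range(n):
        rows.append(np.concatenate([item_dummy, [total[i]],
                                    item_dummy * group[i]]))
        targets.append(y[i, j])
X, t = np.array(rows), np.array(targets)

# The L1 penalty shrinks most group-by-item coefficients to exactly zero.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.05,
                         fit_intercept=False).fit(X, t)
dif_coefs = clf.coef_[0][J + 1:]
print("group-by-item coefficients:", np.round(dif_coefs, 2))
```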
Peer reviewed
Direct link
Li, Xiaomin; Wang, Wen-Chung – Journal of Educational Measurement, 2015
The assessment of differential item functioning (DIF) is routinely conducted to ensure test fairness and validity. Although many DIF assessment methods have been developed in the context of classical test theory and item response theory, they are not applicable for cognitive diagnosis models (CDMs), as the underlying latent attributes of CDMs are…
Descriptors: Test Bias, Models, Cognitive Measurement, Evaluation Methods