Showing 2,161 to 2,175 of 9,552 results
Sarah Lindstrom Johnson; Ray E. Reichenberg; Kathan Shukla; Tracy E. Waasdorp; Catherine P. Bradshaw – Grantee Submission, 2019
The United States government has become increasingly focused on school climate, as recently evidenced by its inclusion as an accountability indicator in the "Every Student Succeeds Act". Yet, there remains considerable variability in both conceptualizing and measuring school climate. To better inform the research and practice related to…
Descriptors: Item Response Theory, Educational Environment, Accountability, Educational Legislation
Peer reviewed
Direct link
Pishghadam, Reza; Baghaei, Purya; Seyednozadi, Zahra – International Journal of Testing, 2017
This article attempts to present emotioncy as a potential source of test bias to inform the analysis of test item performance. Emotioncy is defined as a hierarchy, ranging from "exvolvement" (auditory, visual, and kinesthetic) to "involvement" (inner and arch), to emphasize the emotions evoked by the senses. This study…
Descriptors: Test Bias, Item Response Theory, Test Items, Psychological Patterns
Peer reviewed
Download full text (PDF on ERIC)
Loukina, Anastassia; Zechner, Klaus; Yoon, Su-Youn; Zhang, Mo; Tao, Jidong; Wang, Xinhao; Lee, Chong Min; Mulholland, Matthew – ETS Research Report Series, 2017
This report presents an overview of the "SpeechRater" automated scoring engine model-building and evaluation process for several item types, with a focus on a low-English-proficiency test-taker population. We discuss each stage of speech scoring, including automatic speech recognition, filtering models for nonscorable responses, and…
Descriptors: Automation, Scoring, Speech Tests, Test Items
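As a hedged illustration of the staged scoring pipeline the report describes (automatic speech recognition, filtering of nonscorable responses, scoring), the sketch below routes a response through a filtering rule before a scoring model. The feature names, thresholds, and weights are hypothetical, not ETS's implementation.

    # Hypothetical two-stage spoken-response scoring pipeline: a filtering
    # model screens out nonscorable responses before a scoring model
    # predicts a score from ASR-derived features (all values illustrative).
    def is_scorable(features, min_speech_seconds=1.0, min_asr_confidence=0.3):
        """Screen out responses too short or too unreliable to score."""
        return (features["speech_duration"] >= min_speech_seconds
                and features["asr_confidence"] >= min_asr_confidence)

    def score_response(features, weights):
        """Weighted sum over ASR-derived features (placeholder weights)."""
        return sum(w * features[name] for name, w in weights.items())

    response = {"speech_duration": 42.0, "asr_confidence": 0.85,
                "words_per_second": 2.1, "pause_ratio": 0.15}
    weights = {"asr_confidence": 1.2, "words_per_second": 0.8,
               "pause_ratio": -0.5}

    if is_scorable(response):
        print(f"score: {score_response(response, weights):.2f}")
    else:
        print("routed for human review: nonscorable")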
Peer reviewed
Download full text (PDF on ERIC)
Hohensinn, Christine; Baghaei, Purya – Psicologica: International Journal of Methodology and Experimental Psychology, 2017
In large-scale multiple-choice (MC) tests, alternate forms of a test may be developed to prevent cheating by changing the order of items or the position of the response options. The assumption is that, because the content of the test forms is the same, the order of items or the positions of the response options have no effect on…
Descriptors: Multiple Choice Tests, Test Format, Test Items, Difficulty Level
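A minimal way to probe the assumption the abstract questions is to compare an item's classical difficulty (proportion correct) when it appears at different positions on alternate forms; the response vectors below are synthetic, for illustration only.

    # Compare one item's proportion-correct across two forms that place it
    # at different positions (synthetic 0/1 response vectors).
    import statistics

    form_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]  # item appears early in form A
    form_b = [1, 0, 0, 1, 0, 0, 1, 1, 0, 0]  # same item appears late in form B

    p_a, p_b = statistics.mean(form_a), statistics.mean(form_b)
    print(f"form A: {p_a:.2f}  form B: {p_b:.2f}  shift: {p_a - p_b:+.2f}")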
Peer reviewed
Direct link
Wise, Steven L. – Educational Measurement: Issues and Practice, 2017
The rise of computer-based testing has brought with it the capability to measure more aspects of a test event than simply the answers selected or constructed by the test taker. One behavior that has drawn much research interest is the time test takers spend responding to individual multiple-choice items. In particular, very short response…
Descriptors: Guessing (Tests), Multiple Choice Tests, Test Items, Reaction Time
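The "very short response" behavior the abstract refers to is commonly operationalized with a response-time threshold below which an answer is flagged as a rapid guess. The three-second cutoff below is a placeholder; in practice thresholds are often set per item.

    # Flag likely rapid guesses by thresholding response times (seconds).
    def flag_rapid_guesses(response_times, threshold=3.0):
        """Return indices of responses faster than the threshold."""
        return [i for i, t in enumerate(response_times) if t < threshold]

    times = [45.2, 2.1, 38.7, 1.4, 52.3, 29.8]
    print(flag_rapid_guesses(times))  # -> [1, 3]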
Peer reviewed
Direct link
Carney, Michele B.; Cavey, Laurie; Hughes, Gwyneth – Elementary School Journal, 2017
This article illustrates an argument-based approach to presenting validity evidence for assessment items intended to measure a complex construct. Our focus is developing a measure of teachers' ability to analyze and respond to students' mathematical thinking for the purpose of program evaluation. Our validity argument consists of claims addressing…
Descriptors: Mathematics Instruction, Mathematical Logic, Thinking Skills, Evidence
Peer reviewed
Direct link
Foss, Donald J.; Pirozzolo, Joseph W. – Journal of Educational Psychology, 2017
We carried out 4 semester-long studies of student performance in a college research methods course (total N = 588). Two sections of the course were taught each semester, with systematic and controlled differences between them. Key manipulations were repeated (with some variation) across the 4 terms, allowing assessment of the replicability of effects.…
Descriptors: Undergraduate Students, Student Evaluation, Testing, Incidence
Peer reviewed
Direct link
Zaidi, Nikki L.; Swoboda, Christopher M.; Kelcey, Benjamin M.; Manuel, R. Stephen – Advances in Health Sciences Education, 2017
The extant literature has largely ignored a potentially significant source of variance in multiple mini-interview (MMI) scores by "hiding" the variance attributable to the sample of attributes used on an evaluation form. This potential source of hidden variance can be defined as rating items, which typically comprise an MMI evaluation…
Descriptors: Interviews, Scores, Generalizability Theory, Monte Carlo Methods
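To make the "hidden" item-facet variance concrete, a one-facet generalizability sketch decomposes observed MMI ratings into candidate, rating-item, and residual components via a persons-by-items random-effects ANOVA. The ratings below are synthetic; this is an illustration of the framework, not the authors' analysis.

    # One-facet G-study (persons x rating items) on synthetic MMI ratings:
    # estimate variance components from two-way ANOVA mean squares.
    ratings = [[5, 4, 5],   # rows = candidates
               [3, 3, 4],   # columns = rating items on the evaluation form
               [4, 2, 3],
               [5, 5, 4]]
    n_p, n_i = len(ratings), len(ratings[0])
    grand = sum(map(sum, ratings)) / (n_p * n_i)
    p_means = [sum(row) / n_i for row in ratings]
    i_means = [sum(row[i] for row in ratings) / n_p for i in range(n_i)]

    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_p = n_i * sum((m - grand) ** 2 for m in p_means)
    ss_i = n_p * sum((m - grand) ** 2 for m in i_means)
    ms_p = ss_p / (n_p - 1)
    ms_i = ss_i / (n_i - 1)
    ms_res = (ss_total - ss_p - ss_i) / ((n_p - 1) * (n_i - 1))

    var_p = max((ms_p - ms_res) / n_i, 0.0)   # candidate variance
    var_i = max((ms_i - ms_res) / n_p, 0.0)   # the often-hidden item facet
    print(f"persons: {var_p:.3f}  items: {var_i:.3f}  residual: {ms_res:.3f}")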
Peer reviewed
Download full text (PDF on ERIC)
Sahin, Alper; Anil, Duygu – Educational Sciences: Theory and Practice, 2017
This study investigates the effects of sample size and test length on item-parameter estimation in test development utilizing three unidimensional dichotomous models of item response theory (IRT). For this purpose, a real language test comprising 50 items was administered to 6,288 students. Data from this test were used to obtain data sets of…
Descriptors: Test Length, Sample Size, Item Response Theory, Test Construction
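The three unidimensional dichotomous IRT models the study compares (1PL, 2PL, 3PL) share one item response function, with parameters progressively freed; a direct transcription:

    # P(correct | theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))
    # 1PL fixes a common a and c = 0; 2PL frees a but keeps c = 0.
    import math

    def irf(theta, a=1.0, b=0.0, c=0.0):
        """3PL item response function (nests the 2PL and 1PL)."""
        return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

    for label, kwargs in [("1PL", {}), ("2PL", {"a": 1.7}),
                          ("3PL", {"a": 1.7, "c": 0.2})]:
        print(f"{label}: {irf(0.5, b=0.0, **kwargs):.3f}")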
Peer reviewed
Direct link
Byram, Jessica N.; Seifert, Mark F.; Brooks, William S.; Fraser-Cotlin, Laura; Thorp, Laura E.; Williams, James M.; Wilson, Adam B. – Anatomical Sciences Education, 2017
With integrated curricula and multidisciplinary assessments becoming more prevalent in medical education, there is a continued need for educational research to explore the advantages, consequences, and challenges of integration practices. This retrospective analysis investigated the number of items needed to reliably assess anatomical knowledge in…
Descriptors: Anatomy, Science Tests, Test Items, Test Reliability
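The "number of items needed to reliably assess" question is classically answered with the Spearman-Brown prophecy formula; the reliability and length figures below are placeholders, not values from the study.

    # Spearman-Brown prophecy: test length needed for a target reliability.
    import math

    def items_needed(current_rel, current_len, target_rel):
        factor = (target_rel * (1 - current_rel)
                  / (current_rel * (1 - target_rel)))
        return math.ceil(factor * current_len)

    # e.g., a 20-item test with reliability .70, targeting .80:
    print(items_needed(current_rel=0.70, current_len=20, target_rel=0.80))  # 35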
Peer reviewed
Direct link
Lee, Wooyeol; Cho, Sun-Joo – Applied Measurement in Education, 2017
Utilizing a longitudinal item response model, this study investigated the effect of item parameter drift (IPD) on item parameters and person scores via a Monte Carlo study. Item parameter recovery was investigated for various IPD patterns in terms of bias and root mean-square error (RMSE), and percentage of time the 95% confidence interval covered…
Descriptors: Item Response Theory, Test Items, Bias, Computation
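Bias and RMSE, the recovery criteria named in the abstract, compare estimated parameters with their generating values across replications; a minimal transcription with synthetic difficulty estimates:

    # Bias and root mean-square error of item-parameter recovery.
    import math

    def bias(estimates, true_value):
        return sum(e - true_value for e in estimates) / len(estimates)

    def rmse(estimates, true_value):
        return math.sqrt(sum((e - true_value) ** 2 for e in estimates)
                         / len(estimates))

    b_true = 0.50                      # generating difficulty
    b_hats = [0.48, 0.55, 0.61, 0.44]  # estimates from four replications
    print(f"bias: {bias(b_hats, b_true):+.3f}  "
          f"RMSE: {rmse(b_hats, b_true):.3f}")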
Wang, Keyin – ProQuest LLC, 2017
The comparison of item-level computerized adaptive testing (CAT) and multistage adaptive testing (MST) has been researched extensively (e.g., Kim & Plake, 1993; Luecht et al., 1996; Patsula, 1999; Jodoin, 2003; Hambleton & Xing, 2006; Keng, 2008; Zheng, 2012). Various CAT and MST designs have been investigated and compared under the same…
Descriptors: Comparative Analysis, Computer Assisted Testing, Adaptive Testing, Test Items
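The core mechanical difference under comparison: item-level CAT re-selects an item after every response, typically by maximum Fisher information at the current ability estimate, whereas MST routes examinees once per stage through preassembled modules. A hedged sketch of the CAT selection step under a 2PL model (the item pool and ability value are synthetic):

    # Maximum-information item selection, the usual engine of item-level CAT.
    import math

    def p2pl(theta, a, b):
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    def info(theta, a, b):
        p = p2pl(theta, a, b)
        return a * a * p * (1.0 - p)  # 2PL Fisher information

    pool = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.4), (1.0, 1.1)]  # (a, b) pairs
    theta_hat = 0.3
    a, b = max(pool, key=lambda item: info(theta_hat, *item))
    print(f"next item: a={a}, b={b}")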
Peer reviewed
Direct link
Ajello, Anna Maria; Caponera, Elisa; Palmerio, Laura – European Journal of Psychology of Education, 2018
In Italy, from the 2003 reports to the present, the National Institute for the Educational Evaluation of Instruction and Training (INVALSI) has conducted research on Programme for International Student Assessment (PISA) results in order to understand Italian students' low achievement in mathematics. In the present paper, data from a representative…
Descriptors: Foreign Countries, Achievement Tests, International Assessment, Secondary School Students
Peer reviewed
Direct link
Lee, HyeSun – Applied Measurement in Education, 2018
The current simulation study examined the effects of Item Parameter Drift (IPD) occurring in a short scale on parameter estimates in multilevel models where scores from a scale were employed as a time-varying predictor to account for outcome scores. Five factors, including three decisions about IPD, were considered for simulation conditions. It…
Descriptors: Test Items, Hierarchical Linear Modeling, Predictor Variables, Scores
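The design the abstract describes, scale scores entering a growth model as a time-varying predictor of an outcome, can be sketched with statsmodels' mixed-effects API; the data frame and variable names below are hypothetical.

    # outcome_ti = b0 + b1*time_ti + b2*scale_ti + u_i + e_ti  (random intercept)
    import pandas as pd
    import statsmodels.formula.api as smf

    data = pd.DataFrame({
        "person":  [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
        "time":    [0, 1, 2] * 4,
        "scale":   [2.1, 2.4, 2.9, 1.8, 2.0, 2.2,
                    3.0, 3.1, 3.5, 2.5, 2.6, 3.0],
        "outcome": [10.2, 11.0, 12.5, 9.1, 9.8, 10.0,
                    12.9, 13.4, 14.8, 11.0, 11.6, 12.4],
    })
    model = smf.mixedlm("outcome ~ time + scale", data, groups=data["person"])
    print(model.fit().summary())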
Peer reviewed
Download full text (PDF on ERIC)
Denizli, Zeynep Akkurt; Erdogan, Abdulkadir – Journal on Mathematics Education, 2018
This study aimed to develop a three-dimensional geometric thinking test to determine the geometric thinking of early graders in a paper-and-pencil environment. First, we determined the components of three-dimensional geometric thinking and prepared questions for each component. Then, we conducted pilot studies of the test at three stages in six…
Descriptors: Geometry, Mathematics Instruction, Spatial Ability, Teaching Methods