Showing 841 to 855 of 9,552 results
Peer reviewed
Direct link
Jo Lein; Jennifer Gripado – Learning Professional, 2024
There are many valuable sources of evaluation data, including -- but not limited to -- professional learning participants. In the authors' work on leadership development and organizational learning for Tulsa Public Schools in Oklahoma, they regularly ask educators to share feedback and perceptions of usefulness of their professional learning. The…
Descriptors: Participant Satisfaction, Surveys, Test Items, Feedback (Response)
Peer reviewed
PDF on ERIC Download full text
Cobern, William W.; Adams, Betty A. J. – International Journal of Assessment Tools in Education, 2020
What follows is a practical guide for establishing the validity of a survey for research purposes. The motivation for providing this guide is our observation that researchers, not necessarily being survey researchers per se, but wanting to use a survey method, lack a concise resource on validity. There is far more to know about surveys and survey…
Descriptors: Surveys, Test Validity, Test Construction, Test Items
Peer reviewed
Direct link
Antino, Mirko; Alvarado, Jesús M.; Asún, Rodrigo A.; Bliese, Paul – Sociological Methods & Research, 2020
The need to determine the correct dimensionality of theoretical constructs and generate valid measurement instruments when underlying items are categorical has generated a significant volume of research in the social sciences. This article presents two studies contrasting different categorical exploratory techniques. The first study compares…
Descriptors: Nonparametric Statistics, Factor Analysis, Item Analysis, Robustness (Statistics)
Peer reviewed
Direct link
Köhler, Carmen; Robitzsch, Alexander; Hartig, Johannes – Journal of Educational and Behavioral Statistics, 2020
Testing whether items fit the assumptions of an item response theory model is an important step in evaluating a test. In the literature, numerous item fit statistics exist, many of which show severe limitations. The current study investigates the root mean squared deviation (RMSD) item fit statistic, which is used for evaluating item fit in…
Descriptors: Test Items, Goodness of Fit, Statistics, Bias
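[Note: the RMSD statistic discussed in the entry above is, in its commonly used form, the ability-weighted root mean squared deviation between the pseudo-observed and model-implied item characteristic curves. A minimal sketch of that general form follows; the quadrature grid and probabilities are made-up illustrative values, not the authors' data or implementation.]

```python
import numpy as np

def rmsd_item_fit(weights, p_observed, p_model):
    """RMSD item fit: root of the mean squared deviation between the
    pseudo-observed and model-implied response probabilities, weighted
    by the ability density at each quadrature point."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize the ability density
    sq_dev = (np.asarray(p_observed, float) - np.asarray(p_model, float)) ** 2
    return float(np.sqrt(np.sum(w * sq_dev)))

# Hypothetical 5-point quadrature grid:
weights = [0.1, 0.2, 0.4, 0.2, 0.1]        # ability density
p_model = [0.20, 0.40, 0.60, 0.80, 0.90]   # model-implied curve
p_obs   = [0.25, 0.38, 0.65, 0.75, 0.92]   # pseudo-observed curve
print(rmsd_item_fit(weights, p_obs, p_model))  # small value = good fit
```

An RMSD of 0 indicates a perfect match between the two curves; applied cutoffs (e.g., flagging items above a fixed threshold) vary by program and are one of the issues such studies examine.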
Peer reviewed
Direct link
Abdelhamid, Gomaa S. M.; Gómez-Benito, Juana; Abdeltawwab, Ahmed T. M.; Abu Bakr, Mostafa H. S.; Kazem, Amina M. – Journal of Psychoeducational Assessment, 2020
The fourth edition of the Wechsler Adult Intelligence Scale (WAIS-IV) has been used extensively for assessing adult intelligence. This study uses Mokken scale analysis to investigate the psychometric properties of WAIS-IV subtests adapted for the Egyptian population in a sample of 250 adults between 18 and 25 years of age. The monotone…
Descriptors: Foreign Countries, Item Analysis, Adults, Intelligence Tests
Peer reviewed
PDF on ERIC Download full text
Hou, Likun; Terzi, Ragip; de la Torre, Jimmy – International Journal of Assessment Tools in Education, 2020
This study aims to conduct differential item functioning analyses in the context of cognitive diagnosis assessments using various formulations of the Wald test. In implementing the Wald test, two scenarios are considered: one where the underlying reduced model can be assumed; and another where a saturated CDM is used. Illustration of the different…
Descriptors: Cognitive Measurement, Diagnostic Tests, Item Response Theory, Models
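[Note: the entry above applies the Wald test to DIF analysis in cognitive diagnosis models. As a generic illustration only (not the authors' CDM-specific formulation), a Wald test of item-parameter equality between a reference and a focal group can be sketched as follows; the parameter vectors and covariances below are hypothetical.]

```python
import numpy as np
from scipy.stats import chi2

def wald_dif(beta_ref, beta_foc, cov_ref, cov_foc):
    """Generic Wald test of parameter equality across two groups:
    W = d' (V_R + V_F)^{-1} d, where d is the difference between the
    groups' parameter estimates. Under H0 (no DIF), W follows a
    chi-square distribution with len(d) degrees of freedom."""
    d = np.asarray(beta_ref, float) - np.asarray(beta_foc, float)
    V = np.asarray(cov_ref, float) + np.asarray(cov_foc, float)
    W = float(d @ np.linalg.solve(V, d))
    p = float(chi2.sf(W, df=d.size))
    return W, p

# Hypothetical two-parameter item estimates for each group:
W, p = wald_dif(beta_ref=[1.0, 0.5], beta_foc=[0.7, 0.4],
                cov_ref=np.eye(2), cov_foc=np.eye(2))
print(W, p)  # a large W (small p) would flag the item for DIF
```

In the CDM setting the article describes, the parameter vector and its covariance come from the fitted diagnostic model (reduced or saturated), which is exactly the modeling choice the study contrasts.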
Peer reviewed
PDF on ERIC Download full text
Adom, Dickson; Mensah, Jephtar Adu; Dake, Dennis Atsu – International Journal of Evaluation and Research in Education, 2020
Test, measurement, and evaluation are concepts used in education to explain how the progress of learning and the final learning outcomes of students are assessed. However, the terms are often misused in the field of education, especially in Ghana. The objective of the study was to thoroughly explain the concepts to assist educationists and…
Descriptors: Foreign Countries, Educational Research, Evaluation Methods, Measurement Techniques
Peer reviewed
Direct link
Wyse, Adam E.; Babcock, Ben – Educational Measurement: Issues and Practice, 2020
A common belief is that the Bookmark method is a cognitively simpler standard-setting method than the modified Angoff method. However, only a limited amount of research has investigated panelists' ability to perform the Bookmark method well, and whether some of the challenges panelists face with the Angoff method may also be present in the Bookmark…
Descriptors: Standard Setting (Scoring), Evaluation Methods, Testing Problems, Test Items
Peer reviewed
Direct link
Ulitzsch, Esther; von Davier, Matthias; Pohl, Steffi – Educational and Psychological Measurement, 2020
So far, modeling approaches for not-reached items have considered one single underlying process. However, missing values at the end of a test can occur for a variety of reasons. On the one hand, examinees may not reach the end of a test due to time limits and lack of working speed. On the other hand, examinees may not attempt all items and quit…
Descriptors: Item Response Theory, Test Items, Response Style (Tests), Computer Assisted Testing
Peer reviewed
Direct link
Traynor, Anne; Li, Tingxuan; Zhou, Shuqi – Applied Measurement in Education, 2020
During the development of large-scale school achievement tests, panels of independent subject-matter experts use systematic judgmental methods to rate the correspondence between a given test's items and performance objective statements. The individual experts' ratings may then be used to compute summary indices to quantify the match between a…
Descriptors: Alignment (Education), Achievement Tests, Curriculum, Error of Measurement
Peer reviewed
PDF on ERIC Download full text
Dhavala, Soma; Bhatia, Chirag; Bose, Joy; Faldu, Keyur; Avasthi, Aditi – International Educational Data Mining Society, 2020
A good diagnostic assessment is one that can (i) discriminate between students of different abilities for a given skill set, (ii) be consistent with ground truth data and (iii) achieve this with as few assessment questions as possible. In this paper, we explore a method to meet these objectives. This is achieved by selecting questions from a…
Descriptors: Automation, Diagnostic Tests, Test Construction, Test Items
Peer reviewed
Direct link
Lozano, José H.; Revuelta, Javier – Educational and Psychological Measurement, 2023
The present paper introduces a general multidimensional model to measure individual differences in learning within a single administration of a test. Learning is assumed to result from practicing the operations involved in solving the items. The model accounts for the possibility that the ability to learn may manifest differently for correct and…
Descriptors: Bayesian Statistics, Learning Processes, Test Items, Item Analysis
Peer reviewed
Direct link
Uminski, Crystal; Hubbard, Joanna K.; Couch, Brian A. – CBE - Life Sciences Education, 2023
Biology instructors use concept assessments in their courses to gauge student understanding of important disciplinary ideas. Instructors can choose to administer concept assessments based on participation (i.e., lower stakes) or the correctness of responses (i.e., higher stakes), and students can complete the assessment in an in-class or…
Descriptors: Biology, Science Tests, High Stakes Tests, Scores
Peer reviewed
PDF on ERIC Download full text
Wolkowitz, Amanda A.; Foley, Brett; Zurn, Jared – Practical Assessment, Research & Evaluation, 2023
The purpose of this study is to introduce a method for converting scored 4-option multiple-choice (MC) items into scored 3-option MC items without re-pretesting the 3-option MC items. This study describes a six-step process for achieving this goal. Data from a professional credentialing exam was used in this study and the method was applied to 24…
Descriptors: Multiple Choice Tests, Test Items, Accuracy, Test Format
Peer reviewed
PDF on ERIC Download full text
Parker, Mark A. J.; Hedgeland, Holly; Jordan, Sally E.; Braithwaite, Nicholas St. J. – European Journal of Science and Mathematics Education, 2023
The study covers the development and testing of the alternative mechanics survey (AMS), a modified force concept inventory (FCI), which used automatically marked free-response questions. Data were collected over a period of three academic years from 611 participants who were taking physics classes at high school and university level. A total of…
Descriptors: Test Construction, Scientific Concepts, Physics, Test Reliability