Showing 1 to 15 of 653 results
Peer reviewed
TsungHan Ho – Applied Measurement in Education, 2023
An operational multistage adaptive test (MST) requires the development of a large item bank and the effort to continuously replenish the item bank due to concerns about test security and validity over the long term. New items should be pretested and linked to the item bank before being used operationally. The linking item volume fluctuations in…
Descriptors: Bayesian Statistics, Regression (Statistics), Test Items, Pretesting
Peer reviewed
Daniel Jurich; Chunyan Liu – Applied Measurement in Education, 2023
Screening items for parameter drift helps protect against serious validity threats and ensure score comparability when equating forms. Although many high-stakes credentialing examinations operate with small sample sizes, few studies have investigated methods to detect drift in small sample equating. This study demonstrates that several newly…
Descriptors: High Stakes Tests, Sample Size, Item Response Theory, Equated Scores
Peer reviewed
Daniel A. DeCino; Steven R. Chesnut; Phillip L. Waalkes; Reed N. Keen – Measurement and Evaluation in Counseling and Development, 2025
Objective: The purpose of this study was to develop and validate the Counselor Self-Reflection Inventory (CSRI), grounded in a Transformative Learning Theory framework, for counselors and counselors-in-training to use in clinical and training settings. Method: A sample of 351 mostly female (86.89%), white (85.19%) counselors with MS or MA degrees (88.08%)…
Descriptors: Test Construction, Test Validity, Test Reliability, Attitude Measures
Peer reviewed
Irwin, Clare W.; Stafford, Erin T. – Regional Educational Laboratory Northeast & Islands, 2016
This guide describes a five-step collaborative process that educators can use with other educators, researchers, and content experts to write or adapt questions and develop surveys for education contexts. This process allows educators to leverage the expertise of individuals within and outside of their organization to ensure a high-quality survey…
Descriptors: Surveys, Test Construction, Educational Cooperation, Test Items
GED Testing Service, 2016
This guide is designed to help adult educators and administrators better understand the content of the GED® test. This guide is tailored to each test subject and highlights the test's item types, assessment targets, and guidelines for how items will be scored. This 2016 edition has been updated to include the most recent information about the…
Descriptors: Guidelines, Teaching Guides, High School Equivalency Programs, Test Items
Educational Testing Service, 2011
Choosing whether to test via computer is the most difficult and consequential decision the designers of a testing program can make. The decision is difficult because of the wide range of choices available. Designers can choose where and how often the test is made available, how the test items look and function, how those items are combined into…
Descriptors: Test Items, Testing Programs, Testing, Computer Assisted Testing
Peer reviewed
Chan, David W. – Gifted Child Quarterly, 2010
Data of item responses to the Impossible Figures Task (IFT) from 492 Chinese primary, secondary, and university students were analyzed using the dichotomous Rasch measurement model. Item difficulty estimates and person ability estimates located on the same logit scale revealed that the pooled sample of Chinese students, who were relatively highly…
Descriptors: Test Items, Adaptive Testing, Scaling, Talent Identification
Yuan, Kun; Le, Vi-Nhuan – RAND Corporation, 2014
In 2010, the William and Flora Hewlett Foundation's Education Program established the Deeper Learning Initiative, which focuses on students' development of deeper learning skills (i.e., the mastery of core academic content, critical thinking, problem solving, collaboration, communication, and "learn-how-to-learn" skills). Two test…
Descriptors: Test Items, Cognitive Processes, Difficulty Level, Skill Development
Peer reviewed
Jiao, Hong – Measurement: Interdisciplinary Research and Perspectives, 2009
Diagnostic assessment is currently an active research area in educational measurement. Literature related to diagnostic modeling has been in existence for several decades, but a great deal of research has been conducted within the last decade or so, especially within the last five years. The author summarizes the key components in the application…
Descriptors: Educational Assessment, Literature Reviews, Test Items, Probability
Nering, Michael L., Ed.; Ostini, Remo, Ed. – Routledge, Taylor & Francis Group, 2010
This comprehensive "Handbook" focuses on the most used polytomous item response theory (IRT) models. These models help us understand the interaction between examinees and test questions where the questions have various response categories. The book reviews all of the major models and includes discussions about how and where the models…
Descriptors: Guides, Item Response Theory, Test Items, Correlation
Brown, F. Dale; Mitchell, Thomas O. – Educational Technology, 1980
Presents the three components of the Tests with Instructional Feedback on Slides (TIFS) system, the needs each addresses, background on the production of the test item reference card, and the advantages of the system for both instructor and student. (MER)
Descriptors: Feedback, Production Techniques, Slides, Test Items
Peer reviewed
Wolf, Mikyung Kim; Herman, Joan L.; Kim, Jinok; Abedi, Jamal; Leon, Seth; Griffin, Noelle; Bachman, Patina L.; Chang, Sandy M.; Farnsworth, Tim; Jung, Hyekyung; Nollner, Julie; Shin, Hye Won – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2008
This research project addresses the validity of assessments used to measure the performance of English language learners (ELLs), such as those mandated by the No Child Left Behind Act of 2001 (NCLB, 2002). The goals of the research are to help educators understand and improve ELL performance by investigating the validity of their current…
Descriptors: Validity, Second Language Learning, Researchers, Language Proficiency
Peer reviewed
Joshi, Bhairav D. – Journal of Chemical Education, 1986
Provides a question (with an acceptable answer) designed to test students' ability to apply and extend the concept of thermodynamic work discussed in the classroom. The question was originally designed as part of a take-home examination. (JN)
Descriptors: Chemistry, College Science, Higher Education, Science Education
Peer reviewed
Fox, Robert A. – Journal of School Health, 1980
Offers practical guidelines for developing multiple-choice tests in three steps: (1) test design; (2) proper construction of test items; and (3) item analysis and evaluation. (JMF)
Descriptors: Guidelines, Objective Tests, Planning, Test Construction
Berk, Ronald A. – Educational Technology, 1980
Examines four factors involved in the determination of how many test items should be constructed or sampled for a set of objectives: (1) the type of decision to be made with results, (2) importance of objectives, (3) number of objectives, and (4) practical constraints. Specific guidelines that teachers and evaluators can use and an illustrative…
Descriptors: Behavioral Objectives, Criterion Referenced Tests, Guidelines, Test Construction