Showing 1 to 15 of 64 results
Peer reviewed
Harold Doran; Tetsuhiro Yamada; Ted Diaz; Emre Gonulates; Vanessa Culver – Journal of Educational Measurement, 2025
Computer adaptive testing (CAT) is an increasingly common mode of test administration offering improved test security, better measurement precision, and the potential for shorter testing experiences. This article presents a new item selection algorithm based on a generalized objective function to support multiple types of testing conditions and…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
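The abstract is cut off before the generalized objective function is described. As a point of reference only, here is a minimal sketch of the classical maximum-Fisher-information selection rule that such objective functions typically generalize; the item pool and parameters below are hypothetical, not from the article:

```python
import numpy as np

def item_information(theta, a, b):
    """Fisher information of 2PL items at ability estimate theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

def select_next_item(theta, a, b, administered):
    """Classical CAT rule: administer the unused item that is most
    informative at the current ability estimate."""
    info = item_information(theta, a, b)
    info[list(administered)] = -np.inf  # never reuse an item
    return int(np.argmax(info))

# Hypothetical 4-item pool: discriminations a, difficulties b.
a = np.array([1.0, 1.5, 0.8, 2.0])
b = np.array([-1.0, 0.0, 0.5, 1.2])
nxt = select_next_item(0.0, a, b, administered={1})
```

A generalized objective function would replace the single information criterion with a weighted combination of goals (precision, content balance, exposure control) to cover multiple testing conditions.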
Peer reviewed
Dahl, Laura S.; Staples, B. Ashley; Mayhew, Matthew J.; Rockenbach, Alyssa N. – Innovative Higher Education, 2023
Surveys with rating scales are often used in higher education research to measure student learning and development, yet testing and reporting on the longitudinal psychometric properties of these instruments is rare. Rasch techniques allow scholars to map item difficulty and individual aptitude on the same linear, continuous scale to compare…
Descriptors: Surveys, Rating Scales, Higher Education, Educational Research
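The "same linear, continuous scale" the abstract refers to is the logit scale of the Rasch model. A minimal illustration of that property (a generic sketch, not the authors' implementation):

```python
import math

def rasch_probability(theta, b):
    """Rasch model: the probability of endorsing/answering an item
    correctly depends only on the difference theta - b, so person
    ability (theta) and item difficulty (b) live on one logit scale."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals difficulty the probability is exactly 0.5,
# which is what allows persons and items to be compared directly.
```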
Peer reviewed
Fournier, Geneviève; Lachance, Lise; Viviers, Simon; Lahrizi, Imane Zineb; Goyer, Liette; Masdonati, Jonas – International Journal for Educational and Vocational Guidance, 2020
The paper presents first the theoretical foundations used to develop a pre-experimental version of a questionnaire on relationship to work, and then the four stages of its initial validation leading to an experimental version. These stages included: (1) Defining the dimensions and sub-dimensions of the relationship to work concept; (2)…
Descriptors: Test Construction, Content Validity, Work Attitudes, Test Items
Peer reviewed
Chen, Yunxiao; Lee, Yi-Hsuan; Li, Xiaoou – Journal of Educational and Behavioral Statistics, 2022
In standardized educational testing, test items are reused in multiple test administrations. To ensure the validity of test scores, the psychometric properties of items should remain unchanged over time. In this article, we consider the sequential monitoring of test items, in particular, the detection of abrupt changes to their psychometric…
Descriptors: Standardized Tests, Test Items, Test Validity, Scores
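The abstract is truncated before the detection procedure is named. As an illustration of the general idea of sequentially monitoring a reused item for an abrupt change, here is a one-sided CUSUM on a monitored item statistic; the statistic, slack, and threshold below are hypothetical choices, not the authors' method:

```python
def cusum_alarm(series, target, slack, threshold):
    """One-sided CUSUM: accumulate downward drift of a monitored item
    statistic (e.g., proportion correct across administrations) below
    its historical target, and flag the first administration where the
    cumulative drift exceeds the threshold. Returns the alarm index,
    or None if no change is detected."""
    s = 0.0
    for i, x in enumerate(series):
        s = max(0.0, s + (target - x) - slack)
        if s > threshold:
            return i
    return None

# Hypothetical history: the item's proportion correct drops abruptly,
# e.g., after the item is compromised.
history = [0.60] * 5 + [0.50] * 5
alarm = cusum_alarm(history, target=0.60, slack=0.02, threshold=0.15)
```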
Peer reviewed
PDF on ERIC
Zehner, Fabian; Eichmann, Beate; Deribo, Tobias; Harrison, Scott; Bengs, Daniel; Andersen, Nico; Hahnel, Carolin – Journal of Educational Data Mining, 2021
The NAEP EDM Competition required participants to predict efficient test-taking behavior based on log data. This paper describes our top-down approach for engineering features by means of psychometric modeling, aiming at machine learning for the predictive classification task. For feature engineering, we employed, among others, the Log-Normal…
Descriptors: National Competency Tests, Engineering Education, Data Collection, Data Analysis
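The model name is cut off after "Log-Normal…", presumably the log-normal response time model commonly attributed to van der Linden. A minimal sketch of its density, using the conventional parameter names rather than anything taken from the paper:

```python
import math

def lognormal_rt_density(t, tau, beta, alpha):
    """Log-normal response time model: ln(t) is normally distributed
    with mean beta - tau (item time intensity minus test-taker speed)
    and standard deviation 1/alpha."""
    z = alpha * (math.log(t) - (beta - tau))
    return alpha / (t * math.sqrt(2.0 * math.pi)) * math.exp(-0.5 * z * z)

# A faster test-taker (larger tau) shifts the response-time
# distribution toward shorter times, which is what makes the model
# useful for engineering features about efficient test-taking.
```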
Oranje, Andreas; Kolstad, Andrew – Journal of Educational and Behavioral Statistics, 2019
The design and psychometric methodology of the National Assessment of Educational Progress (NAEP) is constantly evolving to meet the changing interests and demands stemming from a rapidly shifting educational landscape. NAEP has been built on strong research foundations that include conducting extensive evaluations and comparisons before new…
Descriptors: National Competency Tests, Psychometrics, Statistical Analysis, Computation
Peer reviewed
Sadeghi, Karim; Abolfazli Khonbi, Zainab – Language Testing in Asia, 2017
As perfectly summarised by Ida Lawrence, "Testing is growing by leaps and bounds across the world. There is a realization that a nation's well-being depends crucially on the educational achievement of its population. Valid tests are an essential tool to evaluate a nation's educational standing and to implement efficacious educational reforms.…
Descriptors: Test Items, Item Response Theory, Computer Assisted Testing, Adaptive Testing
College Board, 2023
Over the past several years, content experts, psychometricians, and researchers have been hard at work developing, refining, and studying the digital SAT. The work is grounded in foundational best practices and advances in measurement and assessment design, with fairness for students informing all of the work done. This paper shares learnings from…
Descriptors: College Entrance Examinations, Psychometrics, Computer Assisted Testing, Best Practices
Peer reviewed
Embretson, Susan E. – Educational Measurement: Issues and Practice, 2016
Examinees' thinking processes have become an increasingly important concern in testing. The response processes aspect is a major component of validity, and contemporary tests increasingly involve specifications about the cognitive complexity of examinees' response processes. Yet, empirical research findings on examinees' cognitive processes are…
Descriptors: Testing, Cognitive Processes, Test Construction, Test Items
Peer reviewed
Boone, William J.; Noltemeyer, Amity – Cogent Education, 2017
In order to progress as a field, school psychology research must be informed by effective measurement techniques. One approach to address the need for careful measurement is Rasch analysis. This technique can (a) facilitate the development of instruments that provide useful data, (b) provide data that can be used confidently for both descriptive…
Descriptors: Item Response Theory, School Psychology, School Psychologists, Educational Research
Peer reviewed
Dickison, Philip; Luo, Xiao; Kim, Doyoung; Woo, Ada; Muntean, William; Bergstrom, Betty – Journal of Applied Testing Technology, 2016
Designing a theory-based assessment with sound psychometric qualities to measure a higher-order cognitive construct is a highly desired yet challenging task for many practitioners. This paper proposes a framework for designing a theory-based assessment to measure a higher-order cognitive construct. This framework results in a modularized yet…
Descriptors: Thinking Skills, Cognitive Tests, Test Construction, Nursing
Partnership for Assessment of Readiness for College and Careers, 2015
The Partnership for Assessment of Readiness for College and Careers (PARCC) is a group of states working together to develop a modern assessment that replaces previous state standardized tests. It provides better information for teachers and parents to identify where a student needs help, or is excelling, so they are able to enhance instruction to…
Descriptors: Literacy, Language Arts, Scoring Formulas, Scoring
Peer reviewed
PDF on ERIC
Sabatini, John; Petscher, Yaacov; O'Reilly, Tenaha; Truckenmiller, Adrea – Grantee Submission, 2015
For decades, standardized reading comprehension tests have consisted of a series of passages and associated multiple-choice questions. Although widely used in and out of the classroom, there continues to be considerable disagreement regarding how or whether such tests have net value in the service of advancing educational progress in reading. This…
Descriptors: Middle School Students, High School Students, Reading Comprehension, Reading Tests
Peer reviewed
Haro, Elizabeth K.; Haro, Luis S. – Journal of Chemical Education, 2014
The multiple-choice question (MCQ) is the foundation of knowledge assessment in K-12, higher education, and standardized entrance exams (including the GRE, MCAT, and DAT). However, standard MCQ exams are limited with respect to the types of questions that can be asked when there are only five choices. MCQs offering additional choices more…
Descriptors: Multiple Choice Tests, Coding, Scoring Rubrics, Test Scoring Machines
Peer reviewed
Towns, Marcy H. – Journal of Chemical Education, 2014
Chemistry faculty members are highly skilled in obtaining, analyzing, and interpreting physical measurements, but often they are less skilled in measuring student learning. This work provides guidance for chemistry faculty from the research literature on multiple-choice item development in chemistry. Areas covered include content, stem, and…
Descriptors: Multiple Choice Tests, Test Construction, Psychometrics, Test Items