Showing all 13 results
Peer reviewed
Liu, Yuan; Hau, Kit-Tai – Educational and Psychological Measurement, 2020
In large-scale, low-stakes assessments such as the Programme for International Student Assessment (PISA), students may skip items that are within their ability to complete (missingness). Detecting and handling these noneffortful responses, as a measure of test-taking motivation, is an important issue in modern psychometric models.…
Descriptors: Response Style (Tests), Motivation, Test Items, Statistical Analysis
Peer reviewed
Hsieh, Feng-Jui – International Journal of Science and Mathematics Education, 2013
This paper discusses different conceptual frameworks for measuring mathematics pedagogical content knowledge (MPCK) in international comparison studies. Two large-scale international comparative studies, "Mathematics Teaching in the Twenty-First Century" (MT21; Schmidt et al., 2011) and the "Teacher Education and Development Study…
Descriptors: Pedagogical Content Knowledge, Mathematics Teachers, Mathematics Instruction, Foreign Countries
Peer reviewed
Yen, Yung-Chin; Ho, Rong-Guey; Laio, Wen-Wei; Chen, Li-Ju; Kuo, Ching-Chin – Applied Psychological Measurement, 2012
In a selected response test, aberrant responses such as careless errors and lucky guesses might cause error in ability estimation because these responses do not actually reflect the knowledge that examinees possess. In a computerized adaptive test (CAT), these aberrant responses could further cause serious estimation error due to dynamic item…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Response Style (Tests)
Peer reviewed
PDF on ERIC
Yang, Chih-Wei; Kuo, Bor-Chen; Liao, Chen-Huei – Turkish Online Journal of Educational Technology - TOJET, 2011
The aim of the present study was to develop an online assessment system with constructed-response items in the context of the elementary mathematics curriculum. The system recorded the problem-solving process for constructed-response items and transferred the process to response codes for further analyses. An inference mechanism based on artificial…
Descriptors: Foreign Countries, Mathematics Curriculum, Test Items, Problem Solving
Peer reviewed
Wuang, Yee-Pay; Wang, Li-Chen; Su, Chwen-Yng – Research in Developmental Disabilities: A Multidisciplinary Journal, 2010
The aim of this study was to examine the validity of the Hooper Visual Organization Test (HVOT) for use with children by testing for item fit, unidimensionality, item hierarchy, reliability, and screening capacity. A modified scoring system was devised for the HVOT so that children received some credit for being able to describe the function of…
Descriptors: Test Bias, Down Syndrome, Scoring, Item Response Theory
Peer reviewed
Chen, Li-Ju; Ho, Rong-Guey; Yen, Yung-Chin – Educational Technology & Society, 2010
This study aimed to explore the effects of marking and metacognition-evaluated feedback (MEF) in computer-based testing (CBT) on student performance and review behavior. Marking is a strategy in which students place a question mark next to a test item to indicate an uncertain answer. The MEF provided students with feedback on test results…
Descriptors: Feedback (Response), Test Results, Test Items, Testing
Peer reviewed
Wu, Pei-Chen; Chang, Lily – Measurement and Evaluation in Counseling and Development, 2008
The authors investigated the Chinese version of the Beck Depression Inventory-II (BDI-II-C; Chinese Behavioral Science Corporation, 2000) within the Rasch framework in terms of dimensionality, item difficulty, and category functioning. Two underlying scale dimensions, relatively high item difficulties, and a need for collapsing 2 response…
Descriptors: Test Items, Foreign Countries, Psychometrics, Behavioral Sciences
Peer reviewed
Su, C. Y.; Wang, T. I. – Computers & Education, 2010
The rapid advance of information and communication technologies (ICT) has important impacts on teaching and learning, as well as on educational assessment. Teachers may create assessments using assessment software or test-authoring tools. However, problems could occur, such as neglecting key concepts in the curriculum or…
Descriptors: Test Items, Educational Assessment, Course Content, Foreign Countries
Peer reviewed
Chang, Wen-Chih; Yang, Hsuan-Che; Shih, Timothy K.; Chao, Louis R. – International Journal of Distance Education Technologies, 2009
E-learning provides a convenient and efficient way to learn. Formative assessment not only guides students in instruction and learning and diagnoses skill or knowledge gaps, but also measures progress and supports evaluation. An efficient and convenient formative assessment system is a key component of e-learning. However, most e-learning…
Descriptors: Electronic Learning, Student Evaluation, Formative Evaluation, Educational Objectives
Wang, Wen-chung – 1997
Traditional approaches to the investigation of the objectivity of ratings for constructed-response items are based on classical test theory, which is item-dependent and sample-dependent. Item response theory overcomes this drawback by decomposing item difficulties into genuine difficulties and rater severity. In so doing, objectivity of ability…
Descriptors: College Entrance Examinations, Constructed Response, Foreign Countries, Interrater Reliability
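The decomposition described in the entry above can be sketched with a generic many-facet Rasch formulation (this is the standard form of such models, not necessarily the exact parameterization used in the paper): for person n responding to item i as scored by rater j,

```latex
\log \frac{P(X_{nij} = 1)}{P(X_{nij} = 0)} = \theta_n - \delta_i - \alpha_j
```

where \(\theta_n\) is the person's ability, \(\delta_i\) the genuine item difficulty, and \(\alpha_j\) the severity of rater j. Separating \(\alpha_j\) from \(\delta_i\) is what lets ability estimates remain comparable across raters.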
Peer reviewed
Wang, Wen-Chung – Journal of Applied Measurement, 2000
Proposes a factorial procedure for investigating differential distractor functioning in multiple choice items that models each distractor with a distinct distractibility parameter. Results of a simulation study show that the parameters of the proposed modeling were recovered very well. Analysis of 10 4-choice items from a college entrance…
Descriptors: College Entrance Examinations, Distractors (Tests), Factor Structure, Foreign Countries
Peer reviewed
Beckert, Troy E.; Strom, Robert D.; Strom, Paris S.; Yang, Cheng-Ta; Singh, Archana – Educational and Psychological Measurement, 2007
This study examined whether the original factor structure of the Parent Success Indicator (PSI) could be replicated with scores from generational views on both the English- and Mandarin-language versions of the instrument. The 60-item PSI was evaluated using responses from 840 Taiwanese parents (n = 429) and their 10- to 14-year-old adolescents (n…
Descriptors: Goodness of Fit, Adolescents, Factor Structure, Success
Peer reviewed
Liu, Chao-Lin – Educational Technology & Society, 2005
The author analyzes properties of mutual information between dichotomous concepts and test items. The properties generalize some common intuitions about item comparison, and provide principled foundations for designing item-selection heuristics for student assessment in computer-assisted educational systems. The proposed item-selection strategies…
Descriptors: Test Items, Heuristics, Classification, Item Analysis
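The item-selection idea in the entry above can be illustrated with a minimal sketch (the toy data and function names are illustrative, not from the paper): compute the mutual information between a dichotomous concept-mastery variable and each candidate item's dichotomous responses, then prefer the item with higher MI, since it is more diagnostic of the concept.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information (in bits) between two dichotomous variables,
    estimated from paired observations."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px = Counter(xs)
    py = Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

# Toy data: concept mastery vs. responses on two candidate items.
mastery = [1, 1, 1, 1, 0, 0, 0, 0]
item_a  = [1, 1, 1, 0, 0, 0, 0, 1]   # mostly tracks mastery: diagnostic
item_b  = [1, 0, 1, 0, 1, 0, 1, 0]   # independent of mastery: uninformative

# Select the more informative item for assessing this concept.
mi_a = mutual_information(mastery, item_a)
mi_b = mutual_information(mastery, item_b)
print(mi_a > mi_b)  # True
```

Here `item_b` is statistically independent of mastery, so its MI is zero, while `item_a` carries real information about the concept; an adaptive system would administer `item_a` first.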