Showing all 10 results
Peer reviewed
Huang, Heng-Tsung Danny; Hung, Shao-Ting Alan; Chao, Hsiu-Yi; Chen, Jyun-Hong; Lin, Tsui-Peng; Shih, Ching-Lin – Language Assessment Quarterly, 2022
Prompted by Taiwanese university students' increasing demand for English proficiency assessment, the absence of a test designed specifically for this demographic subgroup, and the lack of a localized and freely-accessible proficiency measure, this project set out to develop and validate a computerized adaptive English proficiency testing (E-CAT)…
Descriptors: Computer Assisted Testing, English (Second Language), Second Language Learning, Second Language Instruction
Peer reviewed
He, Tung-hsien – SAGE Open, 2019
This study employed a mixed-design approach and the Many-Facet Rasch Measurement (MFRM) framework to investigate whether rater bias occurred between the onscreen scoring (OSS) mode and the paper-based scoring (PBS) mode. Nine human raters analytically marked scanned scripts and paper scripts using a six-category (i.e., six-criterion) rating…
Descriptors: Computer Assisted Testing, Scoring, Item Response Theory, Essays
Peer reviewed
Kuo, Bor-Chen; Liao, Chen-Huei; Pai, Kai-Chih; Shih, Shu-Chuan; Li, Cheng-Hsuan; Mok, Magdalena Mo Ching – Educational Psychology, 2020
The current study explores students' collaboration and problem solving (CPS) abilities using a human-to-agent (H-A) computer-based collaborative problem solving assessment. Five CPS assessment units with 76 conversation-based items were constructed using the PISA 2015 CPS framework. In the experiment, 53,855 ninth and tenth graders in Taiwan were…
Descriptors: Computer Assisted Testing, Cooperative Learning, Problem Solving, Item Response Theory
Peer reviewed
Yen, Yung-Chin; Ho, Rong-Guey; Liao, Wen-Wei; Chen, Li-Ju – Educational Technology & Society, 2012
In a test, the score comes closer to the examinee's actual ability when careless mistakes are corrected. In computerized adaptive testing (CAT), however, changing the answer to one item may render the subsequent items no longer appropriate for estimating the examinee's ability. These inappropriate items in a reviewable CAT might in turn introduce bias in ability…
Descriptors: Foreign Countries, Adaptive Testing, Computer Assisted Testing, Item Response Theory
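Several of the records above concern ability estimation under item response theory in CAT. As a minimal illustrative sketch (not drawn from any of the cited studies; the function names and item parameters are invented for illustration), maximum-likelihood ability estimation under the Rasch (1PL) model, and the downward bias a careless error introduces, can be written as:

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(responses, difficulties, iterations=20):
    """Newton-Raphson maximum-likelihood estimate of ability theta.

    responses: list of 0/1 item scores
    difficulties: matching list of item difficulty parameters b
    """
    theta = 0.0
    for _ in range(iterations):
        p = [rasch_prob(theta, b) for b in difficulties]
        # First derivative of the log-likelihood: observed minus expected score
        grad = sum(x - pi for x, pi in zip(responses, p))
        # Second derivative: negative test information (always < 0 here)
        hess = -sum(pi * (1 - pi) for pi in p)
        theta -= grad / hess
    return theta

# A careless error (missing an easy item the examinee actually knows)
# pulls the ability estimate downward:
clean = estimate_theta([1, 1, 1, 0, 0], [-2, -1, 0, 1, 2])
careless = estimate_theta([0, 1, 1, 0, 0], [-2, -1, 0, 1, 2])
```

This is only the estimation step; an operational CAT would additionally select each next item to maximize information at the current theta, which is why answer changes after the fact can invalidate the items already administered.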
Liu, I-Fang; Ko, Hwa-Wei – International Association for Development of the Information Society, 2016
Perspectives from the reading and information fields have assigned similar skills to two different kinds of literacy: online reading abilities and ICT skills. This causes a conflict between the two research fields and increases the difficulty of integrating study results. The purpose of this study was to determine which views are suitable for…
Descriptors: Information Technology, Information Literacy, Computer Literacy, Reading Skills
Peer reviewed
Kuo, Che-Yu; Wu, Hsin-Kai; Jen, Tsung-Hau; Hsu, Ying-Shao – International Journal of Science Education, 2015
The potential of computer-based assessments for capturing complex learning outcomes has been discussed; however, relatively little is understood about how to leverage such potential for summative and accountability purposes. The aim of this study is to develop and validate a multimedia-based assessment of scientific inquiry abilities (MASIA) to…
Descriptors: Multimedia Materials, Program Development, Program Validation, Test Construction
Peer reviewed
Yen, Yung-Chin; Ho, Rong-Guey; Liao, Wen-Wei; Chen, Li-Ju; Kuo, Ching-Chin – Applied Psychological Measurement, 2012
In a selected response test, aberrant responses such as careless errors and lucky guesses might cause error in ability estimation because these responses do not actually reflect the knowledge that examinees possess. In a computerized adaptive test (CAT), these aberrant responses could further cause serious estimation error due to dynamic item…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Response Style (Tests)
Peer reviewed
Yang, Chih-Wei; Kuo, Bor-Chen; Liao, Chen-Huei – Turkish Online Journal of Educational Technology - TOJET, 2011
The aim of the present study was to develop an online assessment system with constructed-response items in the context of the elementary mathematics curriculum. The system recorded the problem-solving process for constructed-response items and transferred the process into response codes for further analyses. An inference mechanism based on artificial…
Descriptors: Foreign Countries, Mathematics Curriculum, Test Items, Problem Solving
Peer reviewed
Yen, Yung-Chin; Ho, Rong-Guey; Chen, Li-Ju; Chou, Kun-Yi; Chen, Yan-Lin – Educational Technology & Society, 2010
The purpose of this study was to examine whether the efficiency, precision, and validity of computerized adaptive testing (CAT) could be improved by assessing confidence differences in knowledge that examinees possessed. We proposed a novel polytomous CAT model called the confidence-weighting computerized adaptive testing (CWCAT), which combined a…
Descriptors: Foreign Countries, English (Second Language), Second Language Learning, Item Response Theory
Peer reviewed
Lai, Ah-Fur; Chen, Deng-Jyi; Chen, Shu-Ling – Journal of Educational Multimedia and Hypermedia, 2008
IRT (Item Response Theory) has been studied and applied in computer-based testing for decades. However, almost all of these existing studies focus exclusively on test questions presented in text-based (or static text/graphic) form. In this paper, we present our study on test questions using both…
Descriptors: Elementary School Students, Semantics, Difficulty Level, Item Response Theory