Showing 376 to 390 of 1,057 results
Peer reviewed
Ishii, Takatoshi; Songmuang, Pokpong; Ueno, Maomi – IEEE Transactions on Learning Technologies, 2014
Educational assessments occasionally require uniform test forms for which each test form comprises a different set of items, but the forms meet equivalent test specifications (i.e., qualities indicated by test information functions based on item response theory). We propose two maximum clique algorithms (MCA) for uniform test form assembly. The…
Descriptors: Simulation, Efficiency, Test Items, Educational Assessment
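The Ishii, Songmuang, and Ueno entry above frames uniform test form assembly as a maximum clique problem over forms with equivalent IRT test information functions. The sketch below is only a rough illustration of that framing, not the authors' algorithms: it invents 2PL item parameters, generates random candidate forms, links forms that are item-disjoint and have similar information functions, and asks networkx for the largest clique.

```python
# Illustrative sketch only: uniform-form assembly as a maximum clique search.
# Item parameters, tolerances, and candidate generation are invented.
import itertools
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_items, form_len, n_candidates = 40, 5, 200
a = rng.uniform(0.8, 2.0, n_items)       # 2PL discrimination
b = rng.uniform(-2.0, 2.0, n_items)      # 2PL difficulty
theta = np.linspace(-3, 3, 13)           # ability grid

def form_information(items):
    """Test information function of a form: sum of 2PL item information."""
    items = np.asarray(items)
    p = 1.0 / (1.0 + np.exp(-a[items, None] * (theta[None, :] - b[items, None])))
    return (a[items, None] ** 2 * p * (1.0 - p)).sum(axis=0)

candidates = [sorted(rng.choice(n_items, form_len, replace=False))
              for _ in range(n_candidates)]
tifs = [form_information(f) for f in candidates]

# Edge = the two forms share no items and their information functions agree
# (within a tolerance) at every ability point; a clique is a set of mutually
# parallel, item-disjoint ("uniform") forms.
G = nx.Graph()
G.add_nodes_from(range(n_candidates))
for i, j in itertools.combinations(range(n_candidates), 2):
    if not set(candidates[i]) & set(candidates[j]) \
            and np.abs(tifs[i] - tifs[j]).max() < 0.5:
        G.add_edge(i, j)

best = max(nx.find_cliques(G), key=len)   # largest of the maximal cliques
print(f"Assembled {len(best)} item-disjoint, information-equivalent forms")
```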
Peer reviewed
Seo, Dong Gi; Weiss, David J. – Educational and Psychological Measurement, 2015
Most computerized adaptive tests (CATs) have been studied using the framework of unidimensional item response theory. However, many psychological variables are multidimensional and might benefit from using a multidimensional approach to CATs. This study investigated the accuracy, fidelity, and efficiency of a fully multidimensional CAT algorithm…
Descriptors: Computer Assisted Testing, Adaptive Testing, Accuracy, Fidelity
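The Seo and Weiss study evaluates a fully multidimensional CAT algorithm; as a point of reference, here is a minimal unidimensional CAT loop with maximum-information item selection and an EAP ability update. Item parameters, test length, and the simulated examinee are invented for illustration; the multidimensional case replaces the scalar ability with a vector and the information value with a matrix-based criterion.

```python
# Minimal unidimensional CAT loop (illustration only, not the study's method):
# pick the unused item with maximum Fisher information at the current ability
# estimate, simulate a response, then re-estimate ability by EAP.
import numpy as np

rng = np.random.default_rng(1)
n_items = 100
a = rng.uniform(0.8, 2.0, n_items)        # 2PL discrimination
b = rng.uniform(-2.5, 2.5, n_items)       # 2PL difficulty
true_theta = 0.7                          # simulated examinee

grid = np.linspace(-4, 4, 81)             # quadrature grid for EAP
prior = np.exp(-0.5 * grid**2)            # standard normal prior (unnormalised)

def p_correct(theta, i):
    return 1.0 / (1.0 + np.exp(-a[i] * (theta - b[i])))

theta_hat, administered, responses = 0.0, [], []
for _ in range(20):                       # fixed test length of 20 items
    p_all = p_correct(theta_hat, np.arange(n_items))
    info = a**2 * p_all * (1 - p_all)
    info[administered] = -np.inf          # never reuse an item
    item = int(np.argmax(info))
    u = int(rng.random() < p_correct(true_theta, item))   # simulated response
    administered.append(item)
    responses.append(u)
    # EAP update: posterior over the grid given all responses so far
    posterior = prior.copy()
    for it, resp in zip(administered, responses):
        p = p_correct(grid, it)
        posterior *= p if resp else (1 - p)
    theta_hat = float(np.sum(grid * posterior) / np.sum(posterior))

print(f"true theta = {true_theta:.2f}, EAP estimate = {theta_hat:.2f}")
```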
Peer reviewed
He, Lianzhen; Min, Shangchao – Language Assessment Quarterly, 2017
The first aim of this study was to develop a computer adaptive EFL test (CALT) that assesses test takers' listening and reading proficiency in English with dichotomous items and polytomous testlets. We reported in detail on the development of the CALT, including item banking, determination of suitable item response theory (IRT) models for item…
Descriptors: Computer Assisted Testing, Adaptive Testing, English (Second Language), Second Language Learning
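The He and Min abstract mentions choosing IRT models for dichotomous items and polytomous testlets. One common choice for the polytomous side is Samejima's graded response model; the snippet below just evaluates its category probabilities with invented parameters (it is not the CALT's actual item bank or model choice).

```python
# Graded response model category probabilities (illustrative parameters).
import numpy as np

def grm_category_probs(theta, a, thresholds):
    """P(score = k | theta) under the graded response model."""
    thresholds = np.asarray(thresholds, dtype=float)
    # Cumulative P(score >= k) for k = 1..K, flanked by 1 and 0
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - thresholds)))
    cum = np.concatenate(([1.0], p_star, [0.0]))
    return cum[:-1] - cum[1:]

# A 4-category testlet score (0-3) with ordered thresholds:
print(grm_category_probs(theta=0.5, a=1.2, thresholds=[-1.0, 0.0, 1.5]))
# With a single threshold the same formula collapses to the 2PL dichotomous case:
print(grm_category_probs(theta=0.5, a=1.2, thresholds=[0.3]))
```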
Peer reviewed
Han, Kyung T. – Applied Psychological Measurement, 2013
Most computerized adaptive testing (CAT) programs do not allow test takers to review and change their responses because doing so could seriously degrade measurement efficiency and make tests vulnerable to manipulative test-taking strategies. Several modified testing methods have been developed that provide restricted review options while…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Testing
Peer reviewed
Pelánek, Radek; Rihák, Jiří – International Educational Data Mining Society, 2016
In online educational systems we can easily collect and analyze extensive data about student learning. Current practice, however, focuses only on some aspects of these data, particularly on the correctness of students' answers. When a student answers incorrectly, the submitted wrong answer can give us valuable information. We provide an overview of…
Descriptors: Foreign Countries, Online Systems, Geography, Anatomy
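Pelánek and Rihák argue that the specific wrong answer a student submits carries information beyond mere incorrectness. A first, purely illustrative step in that direction is tabulating the most frequent wrong answers per item from an answer log; the log format below is an assumption, not their system's schema.

```python
# Tabulate the most common wrong answers per item from a simple answer log.
from collections import Counter, defaultdict

# (item_id, submitted_answer, correct_answer) triples
log = [
    ("capital_of_australia", "Sydney", "Canberra"),
    ("capital_of_australia", "Sydney", "Canberra"),
    ("capital_of_australia", "Melbourne", "Canberra"),
    ("7*8", "54", "56"),
    ("7*8", "54", "56"),
    ("7*8", "63", "56"),
]

wrong_answers = defaultdict(Counter)
for item, answer, correct in log:
    if answer != correct:
        wrong_answers[item][answer] += 1

for item, counter in wrong_answers.items():
    print(item, counter.most_common(2))
```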
Peer reviewed
Shermis, Mark D.; Mao, Liyang; Mulholland, Matthew; Kieftenbeld, Vincent – International Journal of Testing, 2017
This study uses the feature sets employed by two automated scoring engines to determine if a "linguistic profile" could be formulated that would help identify items that are likely to exhibit differential item functioning (DIF) based on linguistic features. Sixteen items were administered to 1200 students where demographic information…
Descriptors: Computer Assisted Testing, Scoring, Hypothesis Testing, Essays
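The Shermis et al. study predicts which items are likely to exhibit DIF from linguistic features. For context only, the snippet below computes one standard screen for DIF itself, the Mantel-Haenszel common odds ratio over matched score strata, with invented counts; it is background, not the authors' feature-based method.

```python
# Mantel-Haenszel DIF screen with toy counts (background illustration only).
import numpy as np

# Per score stratum: (reference correct, reference wrong,
#                     focal correct,     focal wrong)
strata = np.array([
    [30, 20, 22, 28],
    [45, 15, 35, 25],
    [50, 10, 44, 16],
], dtype=float)

A, B, C, D = strata.T
N = strata.sum(axis=1)
or_mh = np.sum(A * D / N) / np.sum(B * C / N)
delta_mh = -2.35 * np.log(or_mh)   # conventional ETS delta-scale transform
print(f"MH odds ratio = {or_mh:.2f}, MH delta = {delta_mh:.2f}")
```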
Çetinavci, Ugur Recep; Öztürk, Ismet – Online Submission, 2017
Pragmatic competence is among the explicitly acknowledged sub-competences that make up communicative competence in any language (Bachman & Palmer, 1996; Council of Europe, 2001). Within the notion of pragmatic competence itself, "implicature (implied meanings)" comes to the fore as one of its five main areas (Levinson, 1983)…
Descriptors: Test Construction, Computer Assisted Testing, Communicative Competence (Languages), Second Language Instruction
Peer reviewed
Mao, Xiuzhen; Xin, Tao – Applied Psychological Measurement, 2013
The Monte Carlo approach which has previously been implemented in traditional computerized adaptive testing (CAT) is applied here to cognitive diagnostic CAT to test the ability of this approach to address multiple content constraints. The performance of the Monte Carlo approach is compared with the performance of the modified maximum global…
Descriptors: Monte Carlo Methods, Cognitive Tests, Diagnostic Tests, Computer Assisted Testing
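The Mao and Xin abstract compares a Monte Carlo item selection approach against deterministic rules under multiple content constraints. The sketch below shows only the generic skeleton such methods share, filtering the pool by remaining blueprint quotas and then drawing the next item at random in proportion to its information; it is a hedged illustration, not the authors' algorithm.

```python
# Content-constrained, randomized item selection (generic illustration only).
import numpy as np

rng = np.random.default_rng(2)
n_items = 60
a = rng.uniform(0.8, 2.0, n_items)
b = rng.uniform(-2.0, 2.0, n_items)
content = rng.choice(["algebra", "geometry", "number"], n_items)
blueprint = {"algebra": 4, "geometry": 3, "number": 3}   # required counts

theta_hat, chosen = 0.0, []
remaining = dict(blueprint)
while sum(remaining.values()) > 0:
    p = 1.0 / (1.0 + np.exp(-a * (theta_hat - b)))
    info = a**2 * p * (1 - p)
    # Only items whose content area still has quota left are eligible
    feasible = [i for i in range(n_items)
                if i not in chosen and remaining[content[i]] > 0]
    weights = info[feasible] / info[feasible].sum()
    item = int(rng.choice(feasible, p=weights))
    chosen.append(item)
    remaining[content[item]] -= 1
    # (ability would be re-estimated from the response here; omitted)

print({area: sum(content[i] == area for i in chosen) for area in blueprint})
```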
Peer reviewed
Hsu, Chia-Ling; Wang, Wen-Chung; Chen, Shu-Ying – Applied Psychological Measurement, 2013
Interest in developing computerized adaptive testing (CAT) under cognitive diagnosis models (CDMs) has increased recently. CAT algorithms that use a fixed-length termination rule frequently lead to different degrees of measurement precision for different examinees. Fixed precision, in which the examinees receive the same degree of measurement…
Descriptors: Computer Assisted Testing, Adaptive Testing, Cognitive Tests, Diagnostic Tests
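Hsu, Wang, and Chen contrast fixed-length termination with fixed-precision termination in cognitive diagnostic CAT. The schematic below states that distinction for the ordinary IRT case, stopping once the standard error of the ability estimate (one over the square root of test information) falls below a target; the threshold and parameters are illustrative assumptions.

```python
# Fixed-length versus fixed-precision stopping rules (schematic illustration).
import numpy as np

def standard_error(a_admin, b_admin, theta_hat):
    """SE of theta under 2PL: 1 / sqrt(test information) for administered items."""
    p = 1.0 / (1.0 + np.exp(-a_admin * (theta_hat - b_admin)))
    return 1.0 / np.sqrt(np.sum(a_admin**2 * p * (1 - p)))

def should_stop(a_admin, b_admin, theta_hat, rule="precision",
                max_items=40, se_target=0.30):
    if rule == "length":                      # fixed-length CAT
        return len(a_admin) >= max_items
    # fixed-precision CAT: stop once the estimate is precise enough
    se = standard_error(np.asarray(a_admin), np.asarray(b_admin), theta_hat)
    return se <= se_target or len(a_admin) >= max_items

print(should_stop([1.2, 1.5, 0.9], [-0.5, 0.2, 1.0], theta_hat=0.1))
```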
Mitchell, Alison M.; Truckenmiller, Adrea; Petscher, Yaacov – Communique, 2015
As part of the Race to the Top initiative, the United States Department of Education made nearly 1 billion dollars available in State Educational Technology grants with the goal of ramping up school technology. One result of this effort is that states, districts, and schools across the country are using computerized assessments to measure their…
Descriptors: Computer Assisted Testing, Educational Technology, Testing, Efficiency
Peer reviewed
Greenhow, Martin – Teaching Mathematics and Its Applications, 2015
This article outlines some key issues for writing effective computer-aided assessment (CAA) questions in subjects with substantial mathematical or statistical content, especially the importance of control of random parameters and the encoding of wrong methods of solution (mal-rules) commonly used by students. The pros and cons of using CAA and…
Descriptors: Mathematics Instruction, Computer Assisted Testing, Educational Principles, Educational Practices
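Greenhow stresses two authoring practices for CAA questions: randomized parameters, so each student sees a fresh realisation, and mal-rules that encode common wrong solution methods as distractors. The toy generator below illustrates both ideas with an invented question template; the specific mal-rules are examples, not taken from the article.

```python
# A randomized question with mal-rule distractors (invented template).
import random

def make_question(seed=None):
    rng = random.Random(seed)
    a, b = rng.randint(2, 9), rng.randint(2, 9)    # random parameters
    x = rng.randint(2, 12)
    c = a * x + b                                  # question: solve a*x + b = c
    correct = x
    malrules = {                                   # encodings of wrong methods
        "forgot to subtract b": c // a,
        "subtracted a instead of b": (c - a) // b,
        "added b instead of subtracting": (c + b) // a,
    }
    distractors = {label: v for label, v in malrules.items() if v != correct}
    return f"Solve {a}x + {b} = {c}", correct, distractors

stem, answer, wrong = make_question(seed=7)
print(stem, "| answer:", answer, "| mal-rule distractors:", wrong)
```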
Peer reviewed
Rybanov, Alexander Aleksandrovich – Turkish Online Journal of Distance Education, 2013
A set of criteria is offered for assessing the efficiency of the process of forming answers to multiple-choice test items. To increase the accuracy of computer-assisted testing results, it is suggested that the dynamics of forming the final answer be assessed using two factors: a loss-of-time factor and a correct-choice factor. The model…
Descriptors: Evaluation Criteria, Efficiency, Multiple Choice Tests, Test Items
Peer reviewed
Andjelic, Svetlana; Cekerevac, Zoran – Education and Information Technologies, 2014
This article presents an original model of computer adaptive testing and grade formation, based on scientifically recognized theories. The basis of the model is a personalized algorithm for selecting questions depending on the accuracy of the answer to the previous question. The test is divided into three basic levels of difficulty, and the…
Descriptors: Computer Assisted Testing, Educational Technology, Grades (Scholastic), Test Construction
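The Andjelic and Cekerevac model picks the next question from one of three difficulty levels depending on whether the previous answer was correct. The routine below is only a schematic up/down routing rule with assumed level names, not the authors' full selection and grading algorithm.

```python
# Schematic three-level routing: move up after a correct answer, down after a wrong one.
from typing import List

LEVELS: List[str] = ["basic", "intermediate", "advanced"]

def next_level(current: str, last_answer_correct: bool) -> str:
    i = LEVELS.index(current)
    i = min(i + 1, len(LEVELS) - 1) if last_answer_correct else max(i - 1, 0)
    return LEVELS[i]

level = "intermediate"
for correct in [True, True, False, True]:      # a simulated answer sequence
    level = next_level(level, correct)
    print(level)
```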
Peer reviewed
Wang, Chu-Fu; Lin, Chih-Lung; Deng, Jien-Han – Turkish Online Journal of Educational Technology - TOJET, 2012
Testing is an important stage of teaching as it can assist teachers in auditing students' learning results. A good test is able to accurately reflect the capability of a learner. Nowadays, Computer-Assisted Testing (CAT) is greatly improving traditional testing, since computers can automatically and quickly compose a proper test sheet to meet user…
Descriptors: Simulation, Test Items, Student Evaluation, Test Construction
Peer reviewed
Edwards, Michael C.; Flora, David B.; Thissen, David – Applied Measurement in Education, 2012
This article describes a computerized adaptive test (CAT) based on the uniform item exposure multi-form structure (uMFS). The uMFS is a specialization of the multi-form structure (MFS) idea described by Armstrong, Jones, Berliner, and Pashley (1998). In an MFS CAT, the examinee first responds to a small fixed block of items. The items comprising…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Format, Test Items
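In the uMFS design described by Edwards, Flora, and Thissen, every examinee first answers a small fixed routing block and is then branched to a pre-assembled form. The sketch below shows that two-stage skeleton only; the cut scores, item labels, and form names are invented.

```python
# Two-stage routing skeleton: fixed routing block, then a pre-assembled form
# chosen from the number-correct score (all labels and cut scores are invented).
ROUTING_BLOCK = ["item_01", "item_02", "item_03", "item_04", "item_05"]
SECOND_STAGE_FORMS = {
    "easy":   ["item_10", "item_11", "item_12"],
    "medium": ["item_20", "item_21", "item_22"],
    "hard":   ["item_30", "item_31", "item_32"],
}

def route(routing_responses):
    """Pick the second-stage form from the score on the fixed routing block."""
    score = sum(routing_responses)            # 0..5 correct
    if score <= 2:
        return "easy"
    if score <= 4:
        return "medium"
    return "hard"

responses = [1, 1, 0, 1, 1]                   # simulated routing-block responses
print("administer:", SECOND_STAGE_FORMS[route(responses)])
```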