Showing all 13 results
Peer reviewed
Direct link
Raykov, Tenko – Measurement: Interdisciplinary Research and Perspectives, 2023
This software review discusses the capabilities of Stata to conduct item response theory modeling. The commands needed for fitting the popular one-, two-, and three-parameter logistic models are initially discussed. The procedure for testing the discrimination parameter equality in the one-parameter model is then outlined. The commands for fitting…
Descriptors: Item Response Theory, Models, Comparative Analysis, Item Analysis
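For orientation, the models the review covers can be summarized in one standard textbook parameterization (this is not notation taken from the review itself): the three-parameter logistic (3PL) model below nests the 2PL (set c_i = 0) and the 1PL or Rasch-type model (additionally constrain every a_i to a common value a).

```latex
% Probability of a correct response to item i for an examinee of ability theta,
% with discrimination a_i, difficulty b_i, and pseudo-guessing parameter c_i.
P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + \exp\{-a_i(\theta - b_i)\}}
```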
Peer reviewed
PDF on ERIC Download full text
Padilla, Jose Luis; Hidalgo, M. Dolores; Benitez, Isabel; Gomez-Benito, Juana – Psicologica: International Journal of Methodology and Experimental Psychology, 2012
The analysis of differential item functioning (DIF) examines whether examinees who are matched on ability respond differently to an item depending on characteristics such as language or ethnicity. This analysis can be performed by calculating various statistics, one of the most important being the Mantel-Haenszel,…
Descriptors: Foreign Countries, Test Bias, Computer Software, Computer Software Evaluation
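For readers unfamiliar with the statistic named in the abstract, the sketch below computes the Mantel-Haenszel common odds ratio for one studied item from 2x2 tables stratified by matched total score, together with the ETS delta transformation. The function name and the example counts are illustrative, not taken from the article.

```python
import math

def mantel_haenszel_dif(strata):
    """Mantel-Haenszel common odds ratio for one studied item.

    `strata` is a list of (A, B, C, D) tuples, one per matched-score level:
      A = reference group correct,  B = reference group incorrect,
      C = focal group correct,      D = focal group incorrect.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    alpha_mh = num / den                   # common odds ratio across strata
    delta_mh = -2.35 * math.log(alpha_mh)  # ETS delta metric (MH D-DIF); larger |delta| means more DIF
    return alpha_mh, delta_mh

# Illustrative counts for three score strata (reference vs. focal group)
example = [(40, 10, 30, 20), (25, 25, 20, 30), (10, 40, 5, 45)]
print(mantel_haenszel_dif(example))
```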
Peer reviewed
Direct link
Alsubait, Tahani; Parsia, Bijan; Sattler, Uli – Research in Learning Technology, 2012
Different computational models for generating analogies of the form "A is to B as C is to D" have been proposed over the past 35 years. However, analogy generation is a challenging problem that requires further research. In this article, we present a new approach for generating analogies in Multiple Choice Question (MCQ) format that can be used…
Descriptors: Computer Assisted Testing, Programming, Computer Software, Computer Software Evaluation
Peer reviewed
Direct link
Usener, Claus A.; Majchrzak, Tim A.; Kuchen, Herbert – Interactive Technology and Smart Education, 2012
Purpose: To overcome the high manual effort of assessments for teaching personnel, e-assessment systems are used to assess students using information systems (IS). The purpose of this paper is to propose an extension of EASy, a system for e-assessment of exercises that require higher-order cognitive skills. The latest module allows assessing…
Descriptors: Foreign Countries, Computer Software, Computer Software Evaluation, Computer Assisted Testing
Educational Testing Service, 2011
Choosing whether to test via computer is the most difficult and consequential decision the designers of a testing program can make. The decision is difficult because of the wide range of choices available. Designers can choose where and how often the test is made available, how the test items look and function, how those items are combined into…
Descriptors: Test Items, Testing Programs, Testing, Computer Assisted Testing
Peer reviewed
Direct link
Su, C. Y.; Wang, T. I. – Computers & Education, 2010
The rapid advance of information and communication technologies (ICT) has important impacts on teaching and learning, as well as on educational assessment. Teachers may create assessments using existing assessment software or test-authoring tools. However, problems can occur, such as neglecting key concepts in the curriculum or…
Descriptors: Test Items, Educational Assessment, Course Content, Foreign Countries
Peer reviewed
Direct link
Huang, Yueh-Min; Lin, Yen-Ting; Cheng, Shu-Chen – Computers & Education, 2009
With the rapid growth of computer and mobile technology, it is a challenge to integrate computer-based testing (CBT) with mobile learning (m-learning), especially for formative assessment and self-assessment. For self-assessment, computerized adaptive testing (CAT) is a suitable way to enable students to evaluate themselves. In CAT, students are…
Descriptors: Self Evaluation (Individuals), Test Items, Formative Evaluation, Educational Assessment
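As a concrete illustration of the adaptive step mentioned in the abstract, the fragment below selects the next item by maximizing Fisher information at the current ability estimate under a 2PL model. This is a generic CAT sketch with assumed item parameters, not the selection rule used in the article.

```python
import math

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta_hat, item_bank, administered):
    """Choose the unused item that is most informative at the current ability estimate."""
    candidates = [(i, item_information(theta_hat, a, b))
                  for i, (a, b) in enumerate(item_bank) if i not in administered]
    return max(candidates, key=lambda pair: pair[1])[0]

# Illustrative (discrimination, difficulty) parameters for a tiny item bank
bank = [(0.8, -1.0), (1.2, 0.0), (1.5, 0.5), (0.9, 1.2)]
print(next_item(theta_hat=0.3, item_bank=bank, administered={1}))
```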
Peer reviewed
PDF on ERIC Download full text
Yang, Chih-Wei; Kuo, Bor-Chen; Liao, Chen-Huei – Turkish Online Journal of Educational Technology - TOJET, 2011
The aim of the present study was to develop an on-line assessment system with constructed-response items in the context of the elementary mathematics curriculum. The system recorded the problem-solving process for constructed-response items and transferred the process to response codes for further analyses. An inference mechanism based on artificial…
Descriptors: Foreign Countries, Mathematics Curriculum, Test Items, Problem Solving
Peer reviewed
Direct link
Kuo, Yen-Hung; Huang, Yueh-Min – Educational Technology & Society, 2009
Mobile learning (m-learning) is a new trend in the e-learning field. The learning services in m-learning environments are supported by fundamental functions, especially content and assessment services, which need an authoring tool to rapidly generate adaptable learning resources. To meet this pressing demand, this study proposes an…
Descriptors: Test Items, Courseware, Computer Software Evaluation, Computer System Design
Peer reviewed
Harasym, Peter H.; And Others – Journal of Educational Computing Research, 1993
Discussion of the use of human markers to mark responses on write-in questions focuses on a study that determined the feasibility of using a computer program to mark write-in responses for the Medical Council of Canada Qualifying Examination. The computer performance was compared with that of physician markers. (seven references) (LRW)
Descriptors: Comparative Analysis, Computer Assisted Testing, Computer Software Development, Computer Software Evaluation
Peer reviewed
Direct link
Liu, Chao-Lin – Educational Technology & Society, 2005
The author analyzes properties of mutual information between dichotomous concepts and test items. The properties generalize some common intuitions about item comparison, and provide principled foundations for designing item-selection heuristics for student assessment in computer-assisted educational systems. The proposed item-selection strategies…
Descriptors: Test Items, Heuristics, Classification, Item Analysis
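The quantity analyzed in the abstract can be made concrete with a small sketch: the mutual information between a dichotomous concept-mastery variable and a dichotomous item response, computed from their joint distribution. The joint table used here is illustrative, not drawn from the article.

```python
import math

def mutual_information(joint):
    """Mutual information (in bits) between two dichotomous variables.

    `joint[x][y]` is P(concept = x, response = y) for x, y in {0, 1};
    the four probabilities must sum to 1.
    """
    px = [joint[x][0] + joint[x][1] for x in (0, 1)]   # marginal of the concept
    py = [joint[0][y] + joint[1][y] for y in (0, 1)]   # marginal of the item response
    mi = 0.0
    for x in (0, 1):
        for y in (0, 1):
            if joint[x][y] > 0:
                mi += joint[x][y] * math.log2(joint[x][y] / (px[x] * py[y]))
    return mi

# Illustrative joint distribution: mastery and a correct response co-occur often
example = [[0.35, 0.10],   # concept not mastered: P(wrong), P(right)
           [0.05, 0.50]]   # concept mastered:     P(wrong), P(right)
print(mutual_information(example))
```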
Peer reviewed
Cohen, Steve; And Others – Journal of Educational and Behavioral Statistics, 1996
A detailed multisite evaluation of instructional software, the ConStatS package, designed to help students conceptualize introductory probability and statistics, yielded patterns of error on several assessment items. Results from 739 college students demonstrated 10 misconceptions that may be among the most difficult concepts to teach. (SLD)
Descriptors: College Students, Computer Assisted Instruction, Computer Software Evaluation, Educational Assessment
Peer reviewed
Coniam, David – Hong Kong Journal of Applied Linguistics, 1998
Examines the process of developing an English-as-a-Second-Language (ESL) cloze test by computer based on a language corpus. Two such tests developed and pilot-tested in Hong Kong were found to have test items that were not as good as they would have been if designed by a competent human, but better than expected. (Author/MSE)
Descriptors: Applied Linguistics, Cloze Procedure, Computer Assisted Testing, Computer Software Evaluation
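As a minimal illustration of machine-generated cloze items of the kind the abstract describes, the sketch below blanks out roughly every nth word of a passage. A fixed-ratio deletion rule is assumed here for simplicity, whereas the tests in the article drew on a language corpus to inform item design.

```python
import re

def make_cloze(text, gap_interval=7, min_word_len=3):
    """Fixed-ratio cloze generator: blank out roughly every nth content word.

    Returns the gapped text and the answer key. The interval and length
    threshold are arbitrary choices for illustration.
    """
    tokens = re.findall(r"\w+|\W+", text)   # keep punctuation/whitespace runs as separate tokens
    answers, word_count = [], 0
    for i, tok in enumerate(tokens):
        if tok.strip() and tok[0].isalnum():
            word_count += 1
            if word_count % gap_interval == 0 and len(tok) >= min_word_len:
                answers.append(tok)
                tokens[i] = "_" * 8
    return "".join(tokens), answers

passage = ("Cloze tests delete words from a passage at regular intervals, "
           "and examinees must restore each missing word from context.")
gapped, key = make_cloze(passage)
print(gapped)
print(key)
```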