Showing 1 to 15 of 18 results
Peer reviewed
Direct link
Kyung-Mi O. – Language Testing in Asia, 2024
This study examines the efficacy of artificial intelligence (AI) in creating parallel test items compared to human-made ones. Two test forms were developed: one consisting of 20 existing human-made items and another with 20 new items generated with ChatGPT assistance. Expert reviews confirmed the content parallelism of the two test forms.…
Descriptors: Comparative Analysis, Artificial Intelligence, Computer Software, Test Items
Peer reviewed
PDF on ERIC Download full text
Mimi Ismail; Ahmed Al-Badri; Said Al-Senaidi – Journal of Education and e-Learning Research, 2025
This study aimed to reveal the differences in individuals' abilities, their standard errors, and the psychometric properties of the test across the two modes of test administration (electronic and paper-based). The descriptive approach was used to achieve the study's objectives. The study sample consisted of 74 male and female students at the…
Descriptors: Achievement Tests, Computer Assisted Testing, Psychometrics, Item Response Theory
Steven R. Hiner – ProQuest LLC, 2023
The purpose of this study was to determine if there were significant statistical differences between scores on constructed response and computer-scorable questions on an accelerated middle school math placement test in a large urban school district in Ohio, and to ensure that all students have an opportunity to take the test. Five questions on a…
Descriptors: Scores, Middle Schools, Mathematics Tests, Placement Tests
Peer reviewed
Direct link
Moro, Sérgio; Martins, António; Ramos, Pedro; Esmerado, Joaquim; Costa, Joana Martinho; Almeida, Daniela – Computers in the Schools, 2020
Many university programs include Microsoft Excel courses given their value as a scientific and technical tool. However, evaluating what is effectively learned by students is a challenging task. Considering multiple-choice written exams are a standard evaluation format, this study aimed to uncover the features influencing students' success in…
Descriptors: Multiple Choice Tests, Test Items, Spreadsheets, Computer Software
Peer reviewed
PDF on ERIC Download full text
Liao, Linyu – English Language Teaching, 2020
As a high-stakes standardized test, IELTS is expected to have comparable forms of test papers so that test takers from different test administrations on different dates receive comparable test scores. Therefore, this study examined the text difficulty and task characteristics of four parallel academic IELTS reading tests to reveal to what extent…
Descriptors: Second Language Learning, English (Second Language), Language Tests, High Stakes Tests
Peer reviewed
Direct link
Nicklin, Christopher; Vitta, Joseph P. – Language Testing, 2022
Instrument measurement conducted with Rasch analysis is a common process in language assessment research. A recent systematic review of 215 studies involving Rasch analysis in language testing and applied linguistics research reported that 23 different software packages had been utilized. However, none of the analyses were conducted with one of…
Descriptors: Programming Languages, Vocabulary Development, Language Tests, Computer Software
Mullis, Ina V. S., Ed.; Martin, Michael O., Ed.; von Davier, Matthias, Ed. – International Association for the Evaluation of Educational Achievement, 2021
TIMSS (Trends in International Mathematics and Science Study) is a long-standing international assessment of mathematics and science at the fourth and eighth grades that has been collecting trend data every four years since 1995. About 70 countries use TIMSS trend data for monitoring the effectiveness of their education systems in a global…
Descriptors: Achievement Tests, International Assessment, Science Achievement, Mathematics Achievement
Peer reviewed
Direct link
Lesnov, Roman Olegovich – International Journal of Computer-Assisted Language Learning and Teaching, 2018
This article compares second language test-takers' performance on an academic listening test in an audio-only mode versus an audio-video mode. A new method of classifying video-based visuals was developed and piloted, which used L2 expert opinions to place the video on a continuum from being content-deficient (not helpful for answering…
Descriptors: Second Language Learning, Second Language Instruction, Video Technology, Classification
Peer reviewed
Direct link
Wang, Wen-Chung; Chen, Hui-Fang; Jin, Kuan-Yu – Educational and Psychological Measurement, 2015
Many scales contain both positively and negatively worded items. Reverse recoding of negatively worded items might not be enough for them to function as positively worded items do. In this study, we commented on the drawbacks of existing approaches to wording effect in mixed-format scales and used bi-factor item response theory (IRT) models to…
Descriptors: Item Response Theory, Test Format, Language Usage, Test Items
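The reverse recoding that Wang, Chen, and Jin note may be insufficient on its own can be sketched as follows; the 1-to-5 scale range and the item names are illustrative assumptions, not details from the study.

```python
# Minimal sketch of reverse recoding for negatively worded Likert items.
# Assumes a 1-5 response scale; item names are hypothetical.

LIKERT_MIN, LIKERT_MAX = 1, 5

def reverse_code(response: int) -> int:
    """Map a negatively worded item's response onto the positive direction."""
    return LIKERT_MAX + LIKERT_MIN - response

responses = {"item1": 4, "item2_neg": 2, "item3": 5}
negatively_worded = {"item2_neg"}

recoded = {
    item: reverse_code(score) if item in negatively_worded else score
    for item, score in responses.items()
}
print(recoded)  # item2_neg: 2 -> 4; positively worded items unchanged
```

The study's point is that even after this recoding, wording effects can persist, which is why the authors turn to bi-factor IRT models.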
Peay, Edmund R. – 1982
The method for questionnaire construction described in this paper makes it convenient to generate as many different forms for a questionnaire as there are respondents. The method is based on using the computer to produce the questionnaire forms themselves. In this way the items or subgroups of items of the questionnaire may be randomly ordered or…
Descriptors: Computer Assisted Testing, Computer Software, Questionnaires, Sampling
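The per-respondent form generation Peay describes can be sketched with a seeded shuffle, so every respondent gets a uniquely ordered but reproducible form; the item texts and the seeding scheme are assumptions for illustration.

```python
# Hypothetical sketch: one randomized questionnaire form per respondent,
# with item order shuffled using a per-respondent seed for reproducibility.
import random

ITEMS = ["Q1 ...", "Q2 ...", "Q3 ...", "Q4 ..."]

def make_form(respondent_id: int) -> list[str]:
    rng = random.Random(respondent_id)  # reproducible per respondent
    form = ITEMS.copy()
    rng.shuffle(form)
    return form

# Regenerating a form with the same respondent id reproduces it exactly.
assert make_form(1) == make_form(1)
```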
Eiser, Leslie – Classroom Computer Learning, 1988
Discusses the advantages and disadvantages of test generation programs, covering setup, exam printing, and "bells and whistles." Reviews eight computer packages for Apple and IBM personal computers. Compares features, costs, and usage. (CW)
Descriptors: Computer Software, Computer Software Reviews, Computer Uses in Education, Elementary Secondary Education
Peer reviewed
PDF on ERIC Download full text
Scalise, Kathleen; Gifford, Bernard – Journal of Technology, Learning, and Assessment, 2006
Technology today offers many new opportunities for innovation in educational assessment through rich new assessment tasks and potentially powerful scoring, reporting and real-time feedback mechanisms. One potential limitation for realizing the benefits of computer-based assessment in both instructional assessment and large scale testing comes in…
Descriptors: Electronic Learning, Educational Assessment, Information Technology, Classification
Peer reviewed
Aiken, Lewis R. – Educational and Psychological Measurement, 1989
Two alternatives to traditional item analysis and reliability estimation procedures are considered for determining the difficulty, discrimination, and reliability of optional items on essay and other tests. A computer program to compute these measures is described, and illustrations are given. (SLD)
Descriptors: College Entrance Examinations, Computer Software, Difficulty Level, Essay Tests
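This is not Aiken's program, but the classical item statistics the abstract names can be sketched as follows: difficulty as the proportion answering correctly, and discrimination as the item-total (point-biserial) correlation, computed on a tiny made-up score matrix.

```python
# Illustrative sketch of classical item difficulty and discrimination
# on hypothetical dichotomous (1 = correct, 0 = incorrect) data.
from statistics import mean, pstdev

# rows = examinees, columns = items
scores = [
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
]

def difficulty(item: int) -> float:
    """Proportion of examinees answering the item correctly."""
    return mean(row[item] for row in scores)

def discrimination(item: int) -> float:
    """Correlation between the item score and the total test score."""
    totals = [sum(row) for row in scores]
    item_scores = [row[item] for row in scores]
    mi, mt = mean(item_scores), mean(totals)
    cov = mean((x - mi) * (t - mt) for x, t in zip(item_scores, totals))
    return cov / (pstdev(item_scores) * pstdev(totals))

print(difficulty(0))  # 0.75 -- three of four examinees got item 0 right
```

Optional-item tests, Aiken's actual focus, complicate these formulas because not every examinee attempts every item; the sketch above assumes complete data.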
Brodeur, Doris R. – Educational Technology, 1986
Reviews seven commercially produced test generator programs appropriate for use by classroom teachers or individual instructors and identifies item construction and test formatting features that facilitate test design and delivery. Test generator programs and their manufacturers are listed. (MBR)
Descriptors: Computer Assisted Testing, Computer Software, Costs, Evaluation Criteria
Peer reviewed
Vockell, Edward L.; Hall, Jane – Social Studies, 1989
Examines the ways in which computers can assist teachers in developing good tests. Describes the program TESTWORKS in detail and provides charts comparing this program with 11 others in the areas of price, type of questions generated, computer functions, and the usefulness of each. Discusses the use of word processors and databases. (KO)
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Software, Computer Uses in Education