Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 1
Since 2006 (last 20 years): 8
Descriptor
Comparative Analysis: 25
Computer Assisted Testing: 25
Testing Problems: 25
Adaptive Testing: 8
Higher Education: 7
Scores: 7
Test Format: 7
Simulation: 6
Evaluation Methods: 5
Scoring: 5
Test Construction: 5
Publication Type
Reports - Research: 17
Journal Articles: 12
Speeches/Meeting Papers: 8
Reports - Evaluative: 5
Reports - Descriptive: 2
Books: 1
Guides - Non-Classroom: 1
Opinion Papers: 1
Education Level
Higher Education: 3
Postsecondary Education: 2
Audience
Practitioners: 1
Researchers: 1
Teachers: 1
Laws, Policies, & Programs
Assessments and Surveys
ACTFL Oral Proficiency…: 1
Graduate Record Examinations: 1
Indiana Statewide Testing for…: 1
International English…: 1
National Assessment of…: 1
National Longitudinal Study…: 1
Test of English as a Foreign…: 1
Isbell, Dan; Winke, Paula – Language Testing, 2019
The American Council on the Teaching of Foreign Languages (ACTFL) oral proficiency interview -- computer (OPIc) testing system represents an ambitious effort in language assessment: Assessing oral proficiency in over a dozen languages, on the same scale, from virtually anywhere at any time. Especially for users in contexts where multiple foreign…
Descriptors: Oral Language, Language Tests, Language Proficiency, Second Language Learning
Sinharay, Sandip; Wan, Ping; Choi, Seung W.; Kim, Dong-In – Journal of Educational Measurement, 2015
With an increase in the number of online tests, the number of interruptions during testing due to unexpected technical issues seems to be on the rise. For example, interruptions occurred during several recent state tests. When interruptions occur, it is important to determine the extent of their impact on the examinees' scores. Researchers such as…
Descriptors: Computer Assisted Testing, Testing Problems, Scores, Statistical Analysis
Ghilay, Yaron; Ghilay, Ruth – Journal of Educational Technology, 2012
The study examined advantages and disadvantages of computerised assessment compared to traditional evaluation. It was based on two samples of college students (n=54) being examined in computerised tests instead of paper-based exams. Students were asked to answer a questionnaire focused on test effectiveness, experience, flexibility and integrity.…
Descriptors: Student Evaluation, Higher Education, Comparative Analysis, Computer Assisted Testing
Makransky, Guido; Glas, Cees A. W. – International Journal of Testing, 2013
Cognitive ability tests are widely used in organizations around the world because they have high predictive validity in selection contexts. Although these tests typically measure several subdomains, testing is usually carried out for a single subdomain at a time. This can be ineffective when the subdomains assessed are highly correlated. This…
Descriptors: Foreign Countries, Cognitive Ability, Adaptive Testing, Feedback (Response)
Ventouras, Errikos; Triantis, Dimos; Tsiakas, Panagiotis; Stergiopoulos, Charalampos – Computers & Education, 2011
The aim of the present research was to compare the use of multiple-choice questions (MCQs) as an examination method against the oral examination (OE) method. MCQs are widely used and their importance seems likely to grow, due to their inherent suitability for electronic assessment. However, MCQs are influenced by the tendency of examinees to guess…
Descriptors: Grades (Scholastic), Scoring, Multiple Choice Tests, Test Format
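The guessing tendency noted in this abstract is often addressed with the classical correction-for-guessing scoring rule. The sketch below illustrates that rule only; the abstract does not state which scoring adjustment, if any, the authors applied, so the function and its example values are assumptions for illustration.

```python
def corrected_score(num_right, num_wrong, options_per_item):
    """Classical correction-for-guessing: right - wrong / (k - 1).

    Illustrative only; not necessarily the scoring rule used in the
    study cited above.
    """
    return num_right - num_wrong / (options_per_item - 1)

# Example: 30 right, 10 wrong on 4-option items -> 30 - 10/3, about 26.67
print(round(corrected_score(30, 10, 4), 2))
```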
Erfani, Shiva Seyed – English Language Teaching, 2012
One consequence of test use in the English-language teaching community is the negative washback of tests on teaching and learning. Test preparation courses are often seen as part of the more general issue of washback. IELTS and TOEFL iBT tests, focusing on communicative competence, are anticipated to have a positive washback effect on how English is…
Descriptors: Language Tests, English (Second Language), Second Language Learning, Testing Problems
Ventouras, Errikos; Triantis, Dimos; Tsiakas, Panagiotis; Stergiopoulos, Charalampos – Computers & Education, 2010
The aim of the present research was to compare the use of multiple-choice questions (MCQs) as an examination method to examination based on constructed-response questions (CRQs). Although MCQs have an advantage concerning objectivity in the grading process and speed in producing results, they also introduce an error in the final…
Descriptors: Computer Assisted Instruction, Scoring, Grading, Comparative Analysis
King, Chula G.; Guyette, Roger W., Jr.; Piotrowski, Chris – Journal of Educators Online, 2009
Academic integrity has been a perennial issue in higher education. Undoubtedly, the advent of the Internet and advances in user-friendly technological devices have spurred both concern on the part of faculty and research interest in the academic community regarding inappropriate and unethical behavior on the part of students. This study is…
Descriptors: Cheating, Integrity, Ethics, Business Education

Ricketts, C.; Wilks, S. J. – Assessment & Evaluation in Higher Education, 2002
Compared student performance on computer-based assessment to machine-graded multiple choice tests. Found that performance improved dramatically on the computer-based assessment when students were not required to scroll through the question paper. Concluded that students may be disadvantaged by the introduction of online assessment unless care is…
Descriptors: College Students, Comparative Analysis, Computer Assisted Testing, Higher Education
Kolen, Michael J. – Educational Assessment, 1999
Develops a conceptual framework that addresses score comparability for performance assessments, adaptive tests, paper-and-pencil tests, and alternate item pools for computerized tests. Outlines testing situation aspects that might threaten score comparability and describes procedures for evaluating the degree of score comparability. Suggests ways…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Performance Based Assessment
Lee, Jo Ann; And Others – 1984
The difficulty of test items administered by paper and pencil was compared with the difficulty of the same items administered by computer. The study was conducted to determine whether an interaction exists between mode of test administration and ability. An arithmetic reasoning test was constructed for this study. All examinees had taken the Armed…
Descriptors: Adults, Comparative Analysis, Computer Assisted Testing, Difficulty Level
Bringmann, Wolfgang G.; Christian, James K. – 1979
The practice of not sharing test results with clients may soon be in conflict with the Ethical Standards for Psychologist (sic). Studies using self-validation of feedback information to study feedback parameters have shown that the form of feedback is less important than the content. To investigate direct feedback of test results by computer, the…
Descriptors: Comparative Analysis, Computer Assisted Testing, Cost Effectiveness, Ethics
Kim, Seock-Ho; Cohen, Allan S. – 1997
Applications of item response theory to practical testing problems, including equating, differential item functioning, and computerized adaptive testing, require that item parameter estimates be placed onto a common metric. In this study, two methods for developing a common metric for the graded response model under item response theory were…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Equated Scores
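For readers unfamiliar with placing item parameters on a common metric, the sketch below shows the widely used mean/sigma linking method applied to common-item difficulty estimates. It is a generic illustration, not the specific procedures compared in the study above, and the numbers are hypothetical.

```python
import numpy as np

def mean_sigma_link(b_new, b_ref):
    """Mean/sigma linking constants from common-item difficulties.

    Returns (A, B) such that b* = A*b + B (and a* = a/A) places the
    new form's parameters on the reference metric. Generic sketch only.
    """
    A = np.std(b_ref, ddof=1) / np.std(b_new, ddof=1)
    B = np.mean(b_ref) - A * np.mean(b_new)
    return A, B

# Hypothetical difficulty estimates for four common items
A, B = mean_sigma_link(np.array([-1.2, -0.4, 0.3, 1.1]),
                       np.array([-1.0, -0.2, 0.5, 1.4]))
print(A, B)
```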
Stocking, Martha L. – 1996
The interest in the application of large-scale computerized adaptive testing has served to focus attention on issues that arise when theoretical advances are made operational. Some of these issues stem less from changes in testing conditions and more from changes in testing paradigms. One such issue is that of the order in which questions are…
Descriptors: Adaptive Testing, Cognitive Processes, Comparative Analysis, Computer Assisted Testing
Schnipke, Deborah L.; Reese, Lynda M. – 1997
Two-stage and multistage test designs provide a way of roughly adapting item difficulty to test-taker ability. All test takers take a parallel stage-one test, and, based on their scores, they are routed to tests of different difficulty levels in subsequent stages. These designs provide some of the benefits of standard computerized adaptive testing…
Descriptors: Ability, Adaptive Testing, Algorithms, Comparative Analysis
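The routing idea described in this abstract can be sketched in a few lines: a stage-one score determines which second-stage module a test taker receives. The cutoffs and module labels below are hypothetical, not values from the study.

```python
def route_to_stage_two(stage_one_score, cut_low=10, cut_high=20):
    """Route a test taker to a second-stage module by stage-one score.

    Hypothetical cutoffs; operational multistage designs derive them
    from analyses of the routing test.
    """
    if stage_one_score < cut_low:
        return "easy module"
    if stage_one_score < cut_high:
        return "medium module"
    return "hard module"

for score in (6, 14, 25):
    print(score, "->", route_to_stage_two(score))
```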