Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 7 |
Since 2006 (last 20 years) | 18 |
Author
Bodenhorn, Nancy | 2 |
Ng, Kok-Mun | 2 |
Sinharay, Sandip | 2 |
Attali, Yigal | 1 |
Bejar, Isaac | 1 |
Berberoglu, Giray | 1 |
Bulent, Basaran | 1 |
Chen, Yan-Min | 1 |
Cho, YoungWoo | 1 |
Cigdem, Harun | 1 |
Greiff, Samuel | 1 |
Publication Type
Journal Articles | 15 |
Reports - Research | 15 |
Tests/Questionnaires | 5 |
Collected Works - Proceedings | 1 |
Information Analyses | 1 |
Numerical/Quantitative Data | 1 |
Reports - Evaluative | 1 |
Audience
Practitioners | 1 |
Researchers | 1 |
Assessments and Surveys
Graduate Record Examinations | 3 |
Program for International Student Assessment | 2 |
Test of English as a Foreign Language | 2 |
ACT Assessment | 1 |
Raven Advanced Progressive Matrices | 1 |
Steedle, Jeffrey; Pashley, Peter; Cho, YoungWoo – ACT, Inc., 2020
Three mode comparability studies were conducted on the following Saturday national ACT test dates: October 26, 2019, December 14, 2019, and February 8, 2020. The primary goal of these studies was to evaluate whether ACT scores exhibited mode effects between paper and online testing that would necessitate statistical adjustments to the online…
Descriptors: Test Format, Computer Assisted Testing, College Entrance Examinations, Scores
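(Not from the ACT report itself: a minimal sketch of the kind of mode-effect check such comparability studies rely on, comparing paper and online score distributions via a standardized mean difference. All scores below are simulated, and the group design, score scale, and any adjustment threshold are assumptions.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical composite scores for randomly equivalent groups tested
# on paper and online (simulated data, not ACT scores).
paper_scores = rng.normal(loc=20.8, scale=5.4, size=5000)
online_scores = rng.normal(loc=20.6, scale=5.5, size=5000)

def standardized_mean_difference(a, b):
    """Cohen's-d-style effect size using a pooled standard deviation."""
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled_sd

d = standardized_mean_difference(online_scores, paper_scores)
print(f"standardized mode effect: {d:.3f}")

# A negligible |d| (the cutoff is a program decision, not fixed here)
# would suggest the online scores need no statistical adjustment.
```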
Robin, Frédéric; Bejar, Isaac; Liang, Longjuan; Rijmen, Frank – ETS Research Report Series, 2016
Exploratory and confirmatory factor analyses of domestic data from the "GRE"® revised General Test, introduced in 2011, were conducted separately for the verbal (VBL) and quantitative (QNT) reasoning measures to evaluate the unidimensionality and local independence assumptions required by item response theory (IRT). Results based on data…
Descriptors: College Entrance Examinations, Graduate Study, Verbal Tests, Mathematics Tests
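(As background, not the study's actual analysis: one informal descriptive check of the unidimensionality assumption examines the eigenvalues of the inter-item correlation matrix. The sketch below uses simulated item scores standing in for a test section; the sample size, item count, and loadings are arbitrary assumptions.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate item scores driven mostly by a single latent trait
# (hypothetical data, not GRE responses).
n_examinees, n_items = 2000, 20
theta = rng.normal(size=(n_examinees, 1))            # latent ability
loadings = rng.uniform(0.5, 0.9, size=(1, n_items))  # item loadings
items = theta @ loadings + rng.normal(scale=0.7, size=(n_examinees, n_items))

# Eigenvalues of the inter-item correlation matrix.
corr = np.corrcoef(items, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]

# A large first-to-second eigenvalue ratio is one informal signal that a
# single dominant dimension underlies the item responses.
print("first five eigenvalues:", np.round(eigvals[:5], 2))
print("ratio of first to second:", round(eigvals[0] / eigvals[1], 2))
```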
Jansen, Renée S.; van Leeuwen, Anouschka; Janssen, Jeroen; Kester, Liesbeth; Kalz, Marco – Journal of Computing in Higher Education, 2017
The number of students engaged in Massive Open Online Courses (MOOCs) is increasing rapidly. Due to the autonomy of students in this type of education, students in MOOCs are required to regulate their learning to a greater extent than students in traditional, face-to-face education. However, there is no questionnaire available suited for this…
Descriptors: Online Courses, Independent Study, Questionnaires, Likert Scales
Kalender, Ilker; Berberoglu, Giray – Educational Sciences: Theory and Practice, 2017
Admission into university in Turkey is very competitive and features a number of practical problems regarding not only the test administration process itself, but also concerning the psychometric properties of test scores. Computerized adaptive testing (CAT) is seen as a possible alternative approach to solve these problems. In the first phase of…
Descriptors: Foreign Countries, Computer Assisted Testing, College Admission, Simulation
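(For orientation on the CAT approach the study simulates, not its actual item bank or algorithm: adaptive testing typically alternates maximum-information item selection with an updated ability estimate under an IRT model. The sketch below assumes a hypothetical 2PL item bank, a fixed 20-item test length, and a crude grid-search maximum-likelihood ability update.)

```python
import numpy as np

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL IRT model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_2pl(theta, a, b)
    return a**2 * p * (1 - p)

rng = np.random.default_rng(2)
# Hypothetical item bank: discrimination (a) and difficulty (b) parameters.
bank_a = rng.uniform(0.8, 2.0, size=200)
bank_b = rng.normal(0.0, 1.0, size=200)

true_theta, theta_hat = 0.5, 0.0
administered, responses = [], []

for _ in range(20):  # fixed test length of 20 items
    # Select the unadministered item with maximum information at theta_hat.
    info = item_information(theta_hat, bank_a, bank_b)
    info[administered] = -np.inf
    item = int(np.argmax(info))
    administered.append(item)

    # Simulate the examinee's response from the true ability.
    responses.append(rng.random() < p_2pl(true_theta, bank_a[item], bank_b[item]))

    # Grid-search maximum-likelihood update of the ability estimate.
    grid = np.linspace(-4, 4, 161)
    loglik = np.zeros_like(grid)
    for it, resp in zip(administered, responses):
        p = p_2pl(grid, bank_a[it], bank_b[it])
        loglik += np.log(p if resp else 1 - p)
    theta_hat = float(grid[np.argmax(loglik)])

print(f"true theta = {true_theta}, estimated theta = {theta_hat:.2f}")
```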
Bulent, Basaran; Murat, Yalman; Selahattin, Gonen – Educational Research and Reviews, 2016
Today, the spread of Internet use has accelerated the development of educational technologies and increased the quality of education by encouraging teachers' cooperation and participation. As a result, examinations administered via the Internet have become common, and a number of universities have started using distance education management systems.…
Descriptors: Attitude Measures, Computer Assisted Testing, Test Reliability, Test Validity
Sangki Kim – English Teaching, 2017
Intelligibility of second language (L2) English has become an important goal in English pronunciation teaching. However, intelligibility research primarily focused on L2 English users and L2 production features; only a handful of studies have examined other effects on the intelligibility of L2 English. In line with the three-part model of…
Descriptors: Foreign Countries, Language Variation, English (Second Language), Second Language Learning
Kim, Jin-Young – Educational Technology & Society, 2015
This study explores and describes different viewpoints on Computer Based Assessment (CBA) by using Q methodology to identify perspectives of students and instructors and classify these into perceptional typologies. Thirty undergraduate students taking CBA courses and fifteen instructors adopting CBA into their curriculum at a university in Korea,…
Descriptors: Computer Assisted Testing, Classification, Q Methodology, Undergraduate Students
Zou, Xiao-Ling; Chen, Yan-Min – Technology, Pedagogy and Education, 2016
The effects of computer and paper test media on EFL test-takers with different computer familiarity in writing scores and in the cognitive writing process have been comprehensively explored from the learners' aspect as well as on the basis of related theories and practice. The results indicate significant differences in test scores among the…
Descriptors: English (Second Language), Second Language Learning, Second Language Instruction, Test Format
Yaratan, Huseyin; Suphi, Nilgun – Turkish Online Journal of Educational Technology - TOJET, 2013
Questionnaires administered manually can cause surreptitious peer pressure on the candidate to finish when "the others" have completed theirs, forcing students to rush or skip individual items, and may hinder the ability to notice participants who are having difficulty understanding certain items. These drawbacks can have serious…
Descriptors: Synchronous Communication, Questionnaires, Computer Assisted Testing, Undergraduate Students
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of the argument and issue tasks that form the Analytical Writing measure of the "GRE"® General Test. For each of these tasks, this study explored the value added of reporting 4 trait scores for each of these 2 tasks over the total e-rater score.…
Descriptors: Scores, Computer Assisted Testing, Computer Software, Grammar
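(A generic way to frame the "value added" of subscores over a total score, not necessarily the report's method, is the increment in variance explained when trait scores are added to the total score as predictors of a criterion such as a human rating. The sketch below uses simulated trait, total, and human scores; none of the weights or noise levels reflect operational e-rater data.)

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated data: four hypothetical trait scores, a total machine score,
# and a human criterion score (not operational e-rater data).
n = 1000
traits = rng.normal(size=(n, 4))
total = traits.sum(axis=1) + rng.normal(scale=0.5, size=n)
human = traits @ np.array([0.4, 0.3, 0.2, 0.1]) + rng.normal(scale=1.0, size=n)

def r_squared(X, y):
    """R^2 from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_total = r_squared(total.reshape(-1, 1), human)
r2_both = r_squared(np.column_stack([total.reshape(-1, 1), traits]), human)

# The increment shows how much the trait scores add beyond the total
# score for predicting the human criterion in this toy setup.
print(f"R^2 total only: {r2_total:.3f}")
print(f"R^2 total + traits: {r2_both:.3f}")
print(f"increment: {r2_both - r2_total:.3f}")
```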
Greiff, Samuel; Kretzschmar, André; Müller, Jonas C.; Spinath, Birgit; Martin, Romain – Journal of Educational Psychology, 2014
The 21st-century work environment places strong emphasis on nonroutine transversal skills. In an educational context, complex problem solving (CPS) is generally considered an important transversal skill that includes knowledge acquisition and its application in new and interactive situations. The dynamic and interactive nature of CPS requires a…
Descriptors: Computer Assisted Testing, Problem Solving, Difficulty Level, Information Technology
Morrison, Keith – Educational Research and Evaluation, 2013
This paper reviews the literature on comparing online and paper course evaluations in higher education and provides a case study of a very large randomised trial on the topic. It presents a mixed but generally optimistic picture of online course evaluations with respect to response rates, what they indicate, and how to increase them. The paper…
Descriptors: Literature Reviews, Course Evaluation, Case Studies, Higher Education
Cigdem, Harun; Oncu, Semiral – EURASIA Journal of Mathematics, Science & Technology Education, 2015
This survey study examines an assessment methodology based on e-quizzes administered in the "Computer Networks" course at a military vocational college in spring 2013, along with subsequent student perceptions. A total of 30 Computer Technologies and 261 Electronic and Communication Technologies students took three e-quizzes. Data were gathered…
Descriptors: Foreign Countries, Military Schools, Military Training, Vocational Education
Sawaki, Yasuyo; Sinharay, Sandip – ETS Research Report Series, 2013
This study investigates the value of reporting the reading, listening, speaking, and writing section scores for the "TOEFL iBT"® test, focusing on 4 related aspects of the psychometric quality of the TOEFL iBT section scores: reliability of the section scores, dimensionality of the test, presence of distinct score profiles, and the…
Descriptors: Scores, Computer Assisted Testing, Factor Analysis, Correlation
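(For readers unfamiliar with the first of these aspects: section-score reliability is often summarized with an internal-consistency coefficient such as Cronbach's alpha. The sketch below computes alpha on simulated item scores; the data and dimensions are hypothetical and do not represent TOEFL iBT responses.)

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an examinees-by-items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(4)
# Hypothetical section scores: a common factor plus noise (not TOEFL data).
common = rng.normal(size=(500, 1))
scores = common + rng.normal(scale=0.8, size=(500, 10))
print(f"alpha: {cronbach_alpha(scores):.2f}")
```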
Quinlan, Thomas; Higgins, Derrick; Wolff, Susanne – Educational Testing Service, 2009
This report evaluates the construct coverage of the "e-rater"® scoring engine. The matter of construct coverage depends on whether one defines writing skill in terms of process or product. Originally, the e-rater engine consisted of a large set of components with a proven ability to predict human holistic scores. By organizing these capabilities…
Descriptors: Guides, Writing Skills, Factor Analysis, Writing Tests