Showing all 11 results
Peer reviewed
Olsho, Alexis; Smith, Trevor I.; Eaton, Philip; Zimmerman, Charlotte; Boudreaux, Andrew; White Brahmia, Suzanne – Physical Review Physics Education Research, 2023
We developed the Physics Inventory of Quantitative Literacy (PIQL) to assess students' quantitative reasoning in introductory physics contexts. The PIQL includes several "multiple-choice-multiple-response" (MCMR) items (i.e., multiple-choice questions for which more than one response may be selected) as well as traditional single-response…
Descriptors: Multiple Choice Tests, Science Tests, Physics, Measures (Individuals)
Peer reviewed
Ben-Yehudah, Gal; Eshet-Alkalai, Yoram – British Journal of Educational Technology, 2021
The use of digital environments for both learning and assessment is becoming prevalent. This often leads to incongruent situations, in which the study medium (eg, printed textbook) is different from the testing medium (eg, online multiple-choice exams). Despite some evidence that incongruent study-test situations are associated with inferior…
Descriptors: Reading Comprehension, Reading Tests, Test Format, Computer Assisted Testing
Peer reviewed
Yarahmadzehi, Nahid; Goodarzi, Mostafa – Turkish Online Journal of Distance Education, 2020
In this study, technology, and mobile phones in particular, was used in EFL classrooms to examine whether it could influence the process of formative vocabulary assessment and, consequently, improve the vocabulary learning of Iranian pre-intermediate EFL learners. Two groups of pre-intermediate EFL learners participated in this study.…
Descriptors: Formative Evaluation, Computer Assisted Testing, Vocabulary Development, English (Second Language)
Peer reviewed
Ghaderi, Marzieh; Mogholi, Marzieh; Soori, Afshin – International Journal of Education and Literacy Studies, 2014
The subject of testing has many subsets and connections. One important issue is how to assess or measure students or learners: what tools to use, what style to adopt, what goals to pursue, and so on. In this paper the authors therefore attend to the style of testing in schools and other educational settings. Since the purposes of the educational system…
Descriptors: Testing, Testing Programs, Intermode Differences, Computer Assisted Testing
Peer reviewed
Becker, Kirk A.; Bergstrom, Betty A. – Practical Assessment, Research & Evaluation, 2013
The need for increased exam security, improved test formats, more flexible scheduling, better measurement, and more efficient administrative processes has caused testing agencies to consider converting the administration of their exams from paper-and-pencil to computer-based testing (CBT). Many decisions must be made in order to provide an optimal…
Descriptors: Testing, Models, Testing Programs, Program Administration
Peer reviewed
Hardre, Patricia L.; Crowson, H. Michael; Xie, Kui – Journal of Educational Computing Research, 2010
Questionnaire instruments are routinely translated to digital administration systems; however, few studies have compared the differential effects of these administrative methods, and fewer yet in authentic contexts-of-use. In this study, 326 university students were randomly assigned to one of two administration conditions, paper-based (PBA) or…
Descriptors: Internet, Computer Assisted Testing, Questionnaires, College Students
Bahar, Mehmet; Aydin, Fatih; Karakirik, Erol – Online Submission, 2009
In this article, the structural communication grid (SCG), an alternative measurement and evaluation technique, is first summarised, and the design, development, and implementation of a computer-based SCG system are introduced. The system is then tested on a sample of 154 participants consisting of candidate students, science teachers and…
Descriptors: Educational Technology, Technology Integration, Evaluation Methods, Measurement Techniques
Peer reviewed
Macedo-Rouet, Monica; Ney, Muriel; Charles, Sandrine; Lallich-Boidin, Genevieve – Computers & Education, 2009
The use of computers to deliver course-related materials is rapidly expanding in most universities. Yet the effects of computer vs. printed delivery modes on students' performance and motivation are not yet fully known. We compared the impact of Web vs. paper delivery of practice quizzes that require information search in lecture notes. Hundred…
Descriptors: Undergraduate Students, Notetaking, Tests, Lecture Method
Peer reviewed
Gu, Lixiong; Drake, Samuel; Wolfe, Edward W. – Journal of Technology, Learning, and Assessment, 2006
This study seeks to determine whether item features are related to observed differences in item difficulty (DIF) between computer- and paper-based test delivery media. Examinees responded to 60 quantitative items similar to those found on the GRE general test in either a computer-based or paper-based medium. Thirty-eight percent of the items were…
Descriptors: Test Bias, Test Items, Educational Testing, Student Evaluation
Peer reviewed
Johnson, Martin; Green, Sylvia – Journal of Technology, Learning, and Assessment, 2006
The transition from paper-based to computer-based assessment raises a number of important issues about how mode might affect children's performance and question-answering strategies. In this project, 104 eleven-year-olds were given two sets of matched mathematics questions, one set on-line and the other on paper. Facility values were analyzed to…
Descriptors: Student Attitudes, Computer Assisted Testing, Program Effectiveness, Elementary School Students
Peer reviewed
Horkay, Nancy; Bennett, Randy Elliott; Allen, Nancy; Kaplan, Bruce; Yan, Fred – Journal of Technology, Learning, and Assessment, 2006
This study investigated the comparability of scores for paper and computer versions of a writing test administered to eighth grade students. Two essay prompts were given on paper to a nationally representative sample as part of the 2002 main NAEP writing assessment. The same two essay prompts were subsequently administered on computer to a second…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Program Effectiveness