Showing 286 to 300 of 567 results
Peer reviewed
Gehringer, Edward F.; Peddycord, Barry W., III – Journal of Instructional Research, 2013
As homework and other aspects of education migrate to a computer-based format, on-paper exams are beginning to seem like an anachronism. Online delivery is attractive, but comes with a myriad of implications not apparent at first glance. It affects the kinds of questions that can be asked and complicates administration of the exam, but it may make…
Descriptors: Computer Assisted Testing, Online Surveys, Teacher Attitudes, Student Attitudes
Ferguson, Sarah Jane – Statistics Canada, 2016
Canada's knowledge-based economy--especially the fields of science, technology, engineering and mathematics (STEM)--continues to grow. Related changes in the economy, including shifts to globalized markets and an emphasis on innovation and technology, all mean that education is increasingly integral to economic and social well-being.…
Descriptors: Foreign Countries, Womens Education, Educational Attainment, Qualifications
Peer reviewed
Becker, Kirk A.; Bergstrom, Betty A. – Practical Assessment, Research & Evaluation, 2013
The need for increased exam security, improved test formats, more flexible scheduling, better measurement, and more efficient administrative processes has caused testing agencies to consider converting the administration of their exams from paper-and-pencil to computer-based testing (CBT). Many decisions must be made in order to provide an optimal…
Descriptors: Testing, Models, Testing Programs, Program Administration
Peer reviewed
van der Linden, Wim J.; Diao, Qi – Journal of Educational Measurement, 2011
In automated test assembly (ATA), the methodology of mixed-integer programming is used to select test items from an item bank to meet the specifications for a desired test form and optimize its measurement accuracy. The same methodology can be used to automate the formatting of the set of selected items into the actual test form. Three different…
Descriptors: Test Items, Test Format, Test Construction, Item Banks
Peer reviewed
Kuswoyo, Heri – Advances in Language and Literary Studies, 2013
Among the three sections of the Paper-Based TOEFL (PBT), many test takers find the listening comprehension section the most difficult. In this research, the researcher aims to explore how students can learn the PBT listening comprehension section effectively through a song technique. This sounds like a more interesting and engaging way to learn…
Descriptors: Teaching Methods, Test Preparation, English (Second Language), Language Tests
Peer reviewed
Talento-Miller, Eileen; Guo, Fanmin; Han, Kyung T. – International Journal of Testing, 2013
When power tests include a time limit, it is important to assess the possibility of speededness for examinees. Past research on differential speededness has examined gender and ethnic subgroups in the United States on paper and pencil tests. When considering the needs of a global audience, research regarding different native language speakers is…
Descriptors: Adaptive Testing, Computer Assisted Testing, English, Scores
Peer reviewed
Aizawa, Kazumi; Iso, Tatsuo – Research-publishing.net, 2013
The present study aims to demonstrate how the estimation of vocabulary size might be affected by two neglected factors in vocabulary size tests. The first factor is randomization of question sequence, as opposed to the traditional high-to-low frequency sequencing. The second factor is learners' confidence in choosing the correct meaning for a…
Descriptors: Vocabulary, Computer Assisted Testing, Scores, Multiple Choice Tests
Peer reviewed
Brown, Kevin – CEA Forum, 2015
In this article, the author describes his project to take every standardized exam English majors take. During the summer and fall semesters of 2012, the author signed up for and took the GRE General Test, the Praxis Content Area Exam (English Language, Literature, and Composition: Content Knowledge), the Senior Major Field Tests in…
Descriptors: College Faculty, College English, Test Preparation, Standardized Tests
Peer reviewed
Steinberg, Jonathan; Brenneman, Meghan; Castellano, Karen; Lin, Peng; Miller, Susanne – ETS Research Report Series, 2014
Test providers are increasingly moving toward exclusively administering assessments by computer. Computerized testing is becoming more desirable for test takers because of increased opportunities to test, faster turnaround of individual scores, or perhaps other factors, offering potential benefits for those who may be struggling to pass licensure…
Descriptors: Comparative Analysis, Achievement Gap, Academic Achievement, Test Format
Peer reviewed
Read, John; von Randow, Janet – International Journal of English Studies, 2013
The increasingly diverse language backgrounds of their students are creating new challenges for English-medium universities. One response in Australian and New Zealand institutions has been to introduce post-entry language assessment (PELA) to identify incoming students who need to enhance their academic language ability. One successful example of…
Descriptors: Foreign Countries, Language Tests, Academic Discourse, Diagnostic Tests
Westhuizen, Duan vd – Commonwealth of Learning, 2016
This work starts with a brief overview of education in developing countries, to contextualise the use of the guidelines. Although this document is intended to be a practical tool, it is necessary to include some theoretical analysis of the concept of online assessment. This is given in Sections 3 and 4, together with the identification and…
Descriptors: Guidelines, Student Evaluation, Computer Assisted Testing, Evaluation Methods
Peer reviewed
Wan, Lei; Henly, George A. – Applied Measurement in Education, 2012
Many innovative item formats have been proposed over the past decade, but little empirical research has been conducted on their measurement properties. This study examines the reliability, efficiency, and construct validity of two innovative item formats--the figural response (FR) and constructed response (CR) formats used in a K-12 computerized…
Descriptors: Test Items, Test Format, Computer Assisted Testing, Measurement
Peer reviewed
Morrison, Keith – Educational Research and Evaluation, 2013
This paper reviews the literature on comparing online and paper course evaluations in higher education and provides a case study of a very large randomised trial on the topic. It presents a mixed but generally optimistic picture of online course evaluations with respect to response rates, what they indicate, and how to increase them. The paper…
Descriptors: Literature Reviews, Course Evaluation, Case Studies, Higher Education
Peer reviewed
Carr, Nathan T.; Xi, Xiaoming – Language Assessment Quarterly, 2010
This article examines how the use of automated scoring procedures for short-answer reading tasks can affect the constructs being assessed. In particular, it highlights ways in which the development of scoring algorithms intended to apply the criteria used by human raters can lead test developers to reexamine and even refine the constructs they…
Descriptors: Scoring, Automation, Reading Tests, Test Format
Ryan, Barbara A. – ProQuest LLC, 2012
Beginning with the No Child Left Behind federal legislation, states were required to use data to monitor and improve student achievement. For high schools, the Missouri Department of Elementary and Secondary Education chose End of Course Exams (EOC) to demonstrate levels of student achievement. The policy changed from school choice of paper-pencil…
Descriptors: Federal Government, Testing, Grade Point Average, Predictor Variables