Showing 6,376 to 6,390 of 7,091 results
Peer reviewed
Direct link
Kormos, Judit; Denes, Mariann – System: An International Journal of Educational Technology and Applied Linguistics, 2004
The research reported in this paper explores which variables predict native and non-native speaking teachers' perception of fluency and distinguish fluent from non-fluent L2 learners. In addition to traditional measures of the quality of students' output such as accuracy and lexical diversity, we investigated speech samples collected from 16…
Descriptors: Computer Assisted Testing, Native Speakers, Language Fluency, Second Language Learning
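One common operationalization of the lexical diversity measure mentioned above is the type-token ratio; the sketch below assumes that metric, since the snippet does not name the study's exact measure.

```python
# Type-token ratio (TTR), one common operationalization of lexical
# diversity. Assumption: the snippet does not say which metric the
# study actually used.

def type_token_ratio(transcript: str) -> float:
    """Ratio of distinct word types to total word tokens."""
    tokens = transcript.lower().split()
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

sample = "the learner said the answer and then repeated the answer"
print(f"TTR = {type_token_ratio(sample):.2f}")  # TTR = 0.70
```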
Peer reviewed
Direct link
Lim, Kenneth Y. T. – International Research in Geographical and Environmental Education, 2005
This paper describes part of the results of a study investigating how adolescents aged 14 to 15 construct and share meaning about their local environments. Specifically, the results presented focus on how adolescents perceive and interpret spatial and three-dimensional data presented in various formats, such as in terms of…
Descriptors: Intelligence, Test Results, Instructional Effectiveness, Civics
Peer reviewed
Direct link
Miller, Tristan – Journal of Educational Computing Research, 2003
Latent semantic analysis (LSA) is an automated, statistical technique for comparing the semantic similarity of words or documents. In this article, I examine the application of LSA to automated essay scoring. I compare LSA methods to earlier statistical methods for assessing essay quality, and critically review contemporary essay-scoring systems…
Descriptors: Semantics, Test Scoring Machines, Essays, Semantic Differential
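As described here, LSA represents texts in an SVD-reduced term-document space and compares them by cosine similarity; a minimal sketch of that pipeline (illustrative only, not Miller's implementation) follows.

```python
# Minimal LSA sketch (illustrative, not Miller's system): reduce a
# term-document count matrix with SVD, then compare documents by
# cosine similarity in the latent space. Counts below are toy data.
import numpy as np

# rows = terms, columns = documents
td = np.array([
    [2, 0, 1],   # "essay"
    [1, 0, 1],   # "score"
    [0, 3, 0],   # "heart"
    [0, 2, 1],   # "valve"
], dtype=float)

U, s, Vt = np.linalg.svd(td, full_matrices=False)
k = 2                                     # latent dimensions kept
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T    # one row per document

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Documents 0 and 2 share the "essay"/"score" vocabulary, so their
# latent-space similarity is relatively high.
print(round(cosine(doc_vecs[0], doc_vecs[2]), 2))
```

In essay-scoring applications of this kind, a new essay is typically projected into the same latent space and compared against pre-scored essays or source texts.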
Peer reviewed
Download full text (PDF on ERIC)
Gu, Lixiong; Drake, Samuel; Wolfe, Edward W. – Journal of Technology, Learning, and Assessment, 2006
This study seeks to determine whether item features are related to observed differential item functioning (DIF), that is, differences in item difficulty between computer- and paper-based test delivery media. Examinees responded to 60 quantitative items similar to those found on the GRE General Test in either a computer-based or paper-based medium. Thirty-eight percent of the items were…
Descriptors: Test Bias, Test Items, Educational Testing, Student Evaluation
Peer reviewed
Download full text (PDF on ERIC)
Dikli, Semire – Journal of Technology, Learning, and Assessment, 2006
Automated Essay Scoring (AES) is defined as computer technology that evaluates and scores written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). AES systems are mainly used to overcome time, cost, reliability, and generalizability issues in writing assessment (Bereiter, 2003; Burstein,…
Descriptors: Scoring, Writing Evaluation, Writing Tests, Standardized Tests
Peer reviewed
Direct link
Chen, Chih-Ming; Hong, Chin-Ming; Chen, Shyuan-Yi; Liu, Chao-Yu – Educational Technology & Society, 2006
Learning performance assessment aims to evaluate what knowledge learners have acquired from teaching activities. Objective technical measures of learning performance are difficult to develop, but are extremely important for both teachers and learners. Learning performance assessment using learning portfolios or web server log data is becoming an…
Descriptors: Summative Evaluation, Student Evaluation, Formative Evaluation, Performance Based Assessment
Peer reviewed
Direct link
Roever, Carsten – Language Testing, 2006
Despite increasing interest in interlanguage pragmatics research, research on assessment of this crucial area of second language competence still lags behind assessment of other aspects of learners' developing second language (L2) competence. This study describes the development and validation of a 36-item web-based test of ESL pragmalinguistics,…
Descriptors: Familiarity, Test Validity, Speech Acts, Interlanguage
Peer reviewed
Direct link
Petrova, Raina; Tibrewal, Abhilasha; Sobh, Tarek M. – Journal of STEM Education: Innovations and Research, 2006
In keeping with the outcome-based assessment outlined by ABET's Education Criteria 2000, the School of Engineering at the University of Bridgeport has defined fifteen general student outcomes for its computer engineering program. These outcomes form the basis of its instructional program and assessment activities. In assessing and monitoring the…
Descriptors: Engineering Education, STEM Education, Computer Science Education, Goal Orientation
Peer reviewed
Direct link
Clariana, Roy B.; Koul, Ravinder; Salehi, Roya – International Journal of Instructional Media, 2006
This investigation seeks to confirm a computer-based approach that can be used to score concept maps (Poindexter & Clariana, 2004) and then describes the concurrent criterion-related validity of these scores. Participants enrolled in two graduate courses (n=24) were asked to read about and research online the structure and function of the heart…
Descriptors: Semantics, Human Body, Test Validity, Anatomy
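One common family of computer-based concept-map scores counts the student propositions (concept-link-concept triples) that match a referent map; the sketch below assumes that approach and is not necessarily the Poindexter and Clariana method.

```python
# Hedged sketch of one common computer-based concept-map score (not
# necessarily the Poindexter & Clariana approach): the proportion of
# an expert map's propositions (concept-link-concept triples) that a
# student's map reproduces. All triples here are illustrative.

expert = {
    ("heart", "pumps", "blood"),
    ("blood", "flows through", "arteries"),
    ("valves", "prevent", "backflow"),
}

student = {
    ("heart", "pumps", "blood"),
    ("valves", "prevent", "backflow"),
    ("arteries", "carry", "oxygen"),   # not in the expert map
}

score = len(expert & student) / len(expert)
print(f"relational score = {score:.2f}")  # 0.67
```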
Peer reviewed
Direct link
DiLillo, David; Fortier, Michelle A.; Hayes, Sarah A.; Trask, Emily; Perry, Andrea R.; Messman-Moore, Terri; Fauchier, Angele; Nash, Cindy – Assessment, 2006
This study compared retrospective reports of childhood sexual and physical abuse as assessed by two measures: the Childhood Trauma Questionnaire (CTQ), which uses a Likert-type scaling approach, and the Computer Assisted Maltreatment Inventory (CAMI), which employs a behaviorally specific means of assessment. Participants included 1,195…
Descriptors: Undergraduate Students, Factor Analysis, Victims of Crime, Behavior
Peer reviewed
Direct link
Tudor, Gail E. – Journal of Statistics Education, 2006
This paper describes the components of a successful, online, introductory statistics course and shares students' comments and evaluations of each component. Past studies have shown that quality interaction with the professor is lacking in many online courses. While students want a course that is well organized and easy to follow, they also want to…
Descriptors: Online Courses, Interaction, Statistics, Teaching Methods
Peer reviewed
Download full text (PDF on ERIC)
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater® has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that differs from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation
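E-rater V.2, as characterized above, combines a small feature set into a single score; the sketch below assumes a weighted linear model with invented feature names and weights, not ETS's actual model.

```python
# Illustrative only: e-rater V.2 combines a small feature set into a
# single score via a weighted model. The features and weights below
# are invented for this sketch; they are not ETS's actual model.

FEATURE_WEIGHTS = {
    "grammar_errors_per_100_words": -0.8,
    "style_errors_per_100_words": -0.4,
    "avg_word_length": 0.5,
    "log_essay_length": 1.2,
}
INTERCEPT = 2.0

def score_essay(features: dict) -> float:
    """Weighted sum of features, clamped to a 1-6 rubric scale."""
    raw = INTERCEPT + sum(FEATURE_WEIGHTS[name] * value
                          for name, value in features.items())
    return min(6.0, max(1.0, raw))

print(score_essay({
    "grammar_errors_per_100_words": 1.5,
    "style_errors_per_100_words": 0.5,
    "avg_word_length": 4.8,
    "log_essay_length": 2.6,
}))  # 6.0 after clamping
```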
Peer reviewed
Direct link
Li, Jiang – Assessing Writing, 2006
The present study investigated the influence of word processing on the writing of students of English as a second language (ESL) and on writing assessment as well. Twenty-one adult Mandarin-Chinese speakers with advanced English proficiency living in Toronto participated in the study. Each participant wrote two comparable writing tasks under…
Descriptors: Writing Evaluation, Protocol Analysis, Writing Tests, Evaluation Methods
Schaeffer, Gary A.; And Others – 1993
This report contains results of a field test conducted to determine the relationship between a Graduate Record Examination (GRE) linear computer-based test (CBT) and a paper-and-pencil (P&P) test with the same items. Recent GRE examinees participated in the field test by taking either the CBT or the P&P test. Data from the field test…
Descriptors: Attitudes, College Graduates, Computer Assisted Testing, Equated Scores
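One standard technique for relating scores on a CBT form and a P&P form is linear (mean-sigma) equating; the sketch below assumes that method, since the snippet does not state how the relationship was modeled.

```python
# Hypothetical sketch: linear (mean-sigma) equating of CBT scores onto
# the P&P scale. The report's actual analysis method is not specified
# in this snippet; the scores below are invented.
import statistics

def linear_equate(score_cbt, cbt_scores, pp_scores):
    """Map a CBT score to the P&P scale by matching means and SDs."""
    m_c, s_c = statistics.mean(cbt_scores), statistics.stdev(cbt_scores)
    m_p, s_p = statistics.mean(pp_scores), statistics.stdev(pp_scores)
    return m_p + (s_p / s_c) * (score_cbt - m_c)

cbt = [48, 52, 55, 60, 63]
pp = [50, 54, 58, 61, 67]
print(round(linear_equate(57, cbt, pp), 1))  # approx. 59.5
```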
Allen, Nancy L.; Donoghue, John R. – 1995
This Monte Carlo study examined the effect of complex sampling of items on the measurement of differential item functioning (DIF) using the Mantel-Haenszel procedure. Data were generated using a three-parameter logistic item response theory model according to the balanced incomplete block (BIB) design used in the National Assessment of Educational…
Descriptors: Computer Assisted Testing, Difficulty Level, Elementary Secondary Education, Identification
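The abstract names two standard techniques: a three-parameter logistic (3PL) IRT model for generating responses and the Mantel-Haenszel procedure for detecting DIF. The sketch below implements both with illustrative values.

```python
# Sketch of the two techniques the study names: the three-parameter
# logistic (3PL) IRT model used to generate responses, and the
# Mantel-Haenszel common odds ratio used to flag DIF. All parameter
# values and cell counts are illustrative.
import math

def p_3pl(theta, a, b, c):
    """3PL: probability of a correct response given ability theta."""
    return c + (1 - c) / (1 + math.exp(-1.7 * a * (theta - b)))

def mantel_haenszel(tables):
    """Common odds ratio over ability strata.
    tables: list of (A, B, C, D) = (reference right, reference wrong,
    focal right, focal wrong) per stratum; values near 1 suggest no DIF.
    """
    num = sum(A * D / (A + B + C + D) for A, B, C, D in tables)
    den = sum(B * C / (A + B + C + D) for A, B, C, D in tables)
    return num / den

print(round(p_3pl(0.5, a=1.2, b=0.0, c=0.2), 2))       # approx. 0.79
strata = [(30, 10, 25, 15), (20, 20, 15, 25), (10, 30, 8, 32)]
print(round(mantel_haenszel(strata), 2))               # approx. 1.61
```

In operational practice the log of this odds ratio is commonly rescaled (e.g., to the ETS delta metric) to classify DIF severity; that step is omitted here.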