Showing 6,391 to 6,405 of 7,112 results
Peer reviewed
Niemeyer, Chris – RSR: Reference Services Review, 1999
Describes the development and administration of a computerized test at the Iowa State University Library for a mandatory library skills class. Discusses the use of Authorware; test appearance and environment; test administration; compiling test scores; results of students' evaluations; Web courseware; and future of the computerized test. (LRW)
Descriptors: Academic Libraries, Computer Assisted Testing, Courseware, Futures (of Society)
Peer reviewed
Clark, Linda – Educational Leadership, 2005
When an Idaho school district initiated computerized adaptive testing, it discovered that all the growth occurring in the district was limited to the lowest-achieving students. The more proficient students, who included both gifted and above-average learners, showed little or no growth. The assessment data showed a weak curriculum that tended to…
Descriptors: Curriculum Development, Low Achievement, Academically Gifted, Grouping (Instructional Purposes)
Peer reviewed
Heap, Nick W.; Kear, Karen L.; Bissell, Chris C. – European Journal of Engineering Education, 2004
A well-designed assessment strategy can motivate students, and help teachers and institutions to support deep learning. In contrast, inappropriate forms of assessment may promote surface learning, and will therefore fail to support the true goals of education. Recent theories of learning stress the value of dialogue, negotiation and feedback.…
Descriptors: Communities of Practice, Feedback (Response), Learning Theories, Engineering Education
Peer reviewed
Goddard, H. Wallace; Dennis, Steven A. – Journal of Family and Consumer Sciences, 2004
The authors of this article discuss customizing parent education, which in turn requires customized assessment. At Auburn University, Kreg Edgmon and Wally Goddard developed a parent assessment based on the National Extension Parent Education Model (NEPEM) (Smith, Cudaback, Goddard, & Myers-Walls, 1994). All items in the parent assessment were tested with…
Descriptors: Parent Education, Child Rearing, Self Evaluation (Individuals), Information Technology
Peer reviewed
Kormos, Judit; Denes, Mariann – System: An International Journal of Educational Technology and Applied Linguistics, 2004
The research reported in this paper explores which variables predict native and non-native speaking teachers' perception of fluency and distinguish fluent from non-fluent L2 learners. In addition to traditional measures of the quality of students' output such as accuracy and lexical diversity, we investigated speech samples collected from 16…
Descriptors: Computer Assisted Testing, Native Speakers, Language Fluency, Second Language Learning
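As a quick illustration of one of the "traditional measures" this abstract names, lexical diversity is often operationalized minimally as a type-token ratio. The sketch below (plain Python, with a made-up sample utterance) is illustrative only and is not the instrument used in the study.

```python
# Minimal type-token ratio (TTR) as a stand-in for "lexical diversity".
# Illustrative only; the study's actual measures are not specified here.
import re

def type_token_ratio(transcript: str) -> float:
    """Unique words divided by total words (higher = more diverse)."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

sample = "I went to the market and I bought some fruit at the market."
print(round(type_token_ratio(sample), 2))  # repeated words lower the score
```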
Peer reviewed
Lim, Kenneth Y. T. – International Research in Geographical and Environmental Education, 2005
This paper describes part of the results of a study investigating how adolescents, between the ages of 14 and 15, construct and share meaning about their local environments. Specifically, the results presented focus on how adolescents perceive and interpret spatial and three-dimensional data presented in various formats, such as in terms of…
Descriptors: Intelligence, Test Results, Instructional Effectiveness, Civics
Peer reviewed
Miller, Tristan – Journal of Educational Computing Research, 2003
Latent semantic analysis (LSA) is an automated, statistical technique for comparing the semantic similarity of words or documents. In this article, I examine the application of LSA to automated essay scoring. I compare LSA methods to earlier statistical methods for assessing essay quality, and critically review contemporary essay-scoring systems…
Descriptors: Semantics, Test Scoring Machines, Essays, Semantic Differential
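For readers unfamiliar with LSA, the sketch below shows the core mechanics the abstract describes: a term-document matrix reduced by truncated SVD, with similarity measured by cosine in the reduced "latent semantic" space. It assumes scikit-learn is available and uses invented toy essays; it is not any of the essay-scoring systems Miller reviews.

```python
# Minimal LSA similarity sketch (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

reference_essays = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "Plants use sunlight, water, and carbon dioxide to produce glucose.",
]
student_essay = ["Plants turn sunlight and water into sugar for energy."]

# Term-document matrix, then a low-rank SVD projection: the latent
# semantic space in which similarity is measured.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(reference_essays + student_essay)
lsa = TruncatedSVD(n_components=2).fit_transform(X)

# Cosine similarity between the student essay and each reference essay.
scores = cosine_similarity(lsa[-1:], lsa[:-1])
print(scores)  # higher values = semantically closer to the references
```

In a scoring setting, such similarities to pre-scored reference essays would then be mapped onto a grade scale.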
Peer reviewed
Clariana, Roy B.; Wallace, Patricia – Journal of Educational Computing Research, 2007
This proof-of-concept investigation describes a computer-based approach for deriving the knowledge structure of individuals and of groups from their written essays, and considers the convergent criterion-related validity of the computer-based scores relative to human rater essay scores and multiple-choice test scores. After completing a…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Construct Validity, Cognitive Structures
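The abstract does not spell out how the knowledge structures are derived, so the following is a deliberately generic sketch of the idea, not Clariana and Wallace's actual procedure: key-term co-occurrence within sentences defines a network, and two networks (e.g., a student's and an expert's) are compared by edge overlap. The key terms and essays are invented.

```python
# Generic sketch: a "knowledge structure" as a term co-occurrence network,
# compared to a referent network by edge overlap. Standard library only.
import re
from itertools import combinations

KEY_TERMS = {"memory", "encoding", "retrieval", "storage", "attention"}

def network(essay: str) -> set[frozenset]:
    """Edges = key-term pairs that co-occur within a sentence."""
    edges = set()
    for sentence in re.split(r"[.!?]", essay.lower()):
        present = KEY_TERMS & set(re.findall(r"[a-z]+", sentence))
        edges |= {frozenset(pair) for pair in combinations(sorted(present), 2)}
    return edges

def similarity(a: set, b: set) -> float:
    """Jaccard overlap between two edge sets (1.0 = identical structure)."""
    return len(a & b) / len(a | b) if a | b else 0.0

expert = network("Encoding moves information into storage. "
                 "Retrieval depends on encoding. Attention gates encoding.")
student = network("Attention helps encoding. Encoding puts memories into storage.")
print(round(similarity(expert, student), 2))  # structural agreement score
```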
Peer reviewed
Peddecord, K. Michael; Holsclaw, Patricia; Jacobson, Isabel Gomez; Kwizera, Lisa; Rose, Kelly; Gersberg, Richard; Macias-Reynolds, Violet – Journal of Continuing Education in the Health Professions, 2007
Introduction: Few studies have rigorously evaluated the effectiveness of health-related continuing education using satellite distribution. This study assessed participants' professional characteristics and their changes in knowledge, attitudes, and actions taken after viewing a public health preparedness training course on mass vaccination…
Descriptors: Program Effectiveness, Measures (Individuals), Programming (Broadcast), Internet
Peer reviewed. PDF full text available on ERIC.
Gu, Lixiong; Drake, Samuel; Wolfe, Edward W. – Journal of Technology, Learning, and Assessment, 2006
This study seeks to determine whether item features are related to observed differential item functioning (DIF) between computer- and paper-based test delivery media. Examinees responded to 60 quantitative items similar to those found on the GRE general test in either a computer-based or paper-based medium. Thirty-eight percent of the items were…
Descriptors: Test Bias, Test Items, Educational Testing, Student Evaluation
Peer reviewed. PDF full text available on ERIC.
Dikli, Semire – Journal of Technology, Learning, and Assessment, 2006
Automated Essay Scoring (AES) is defined as computer technology that evaluates and scores written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). AES systems are mainly used to overcome time, cost, reliability, and generalizability issues in writing assessment (Bereiter, 2003; Burstein,…
Descriptors: Scoring, Writing Evaluation, Writing Tests, Standardized Tests
Peer reviewed
Chen, Chih-Ming; Hong, Chin-Ming; Chen, Shyuan-Yi; Liu, Chao-Yu – Educational Technology & Society, 2006
Learning performance assessment aims to evaluate what knowledge learners have acquired from teaching activities. Objective technical measures of learning performance are difficult to develop, but are extremely important for both teachers and learners. Learning performance assessment using learning portfolios or web server log data is becoming an…
Descriptors: Summative Evaluation, Student Evaluation, Formative Evaluation, Performance Based Assessment
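The truncation cuts off the authors' actual technique, so the sketch below only illustrates the general idea of turning web server log data into per-learner features for performance assessment; the log format and field positions are assumptions made for illustration.

```python
# Generic sketch (not the authors' method): simple per-student learning
# features derived from assumed web server log lines.
from collections import defaultdict

LOG_LINES = [
    "alice 2006-03-01T10:00 /course/unit1",
    "alice 2006-03-01T10:25 /course/quiz1",
    "bob   2006-03-02T09:00 /course/unit1",
]

features = defaultdict(lambda: {"hits": 0, "pages": set()})
for line in LOG_LINES:
    student, timestamp, path = line.split()
    features[student]["hits"] += 1
    features[student]["pages"].add(path)

for student, f in features.items():
    # e.g., overall activity level and breadth of material visited
    print(student, f["hits"], len(f["pages"]))
```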
Peer reviewed
Roever, Carsten – Language Testing, 2006
Despite increasing interest in interlanguage pragmatics research, research on assessment of this crucial area of second language competence still lags behind assessment of other aspects of learners' developing second language (L2) competence. This study describes the development and validation of a 36-item web-based test of ESL pragmalinguistics,…
Descriptors: Familiarity, Test Validity, Speech Acts, Interlanguage
Schaeffer, Gary A.; And Others – 1993
This report contains results of a field test conducted to determine the relationship between a Graduate Record Examination (GRE) linear computer-based test (CBT) and a paper-and-pencil (P&P) test with the same items. Recent GRE examinees participated in the field test by taking either the CBT or the P&P test. Data from the field test…
Descriptors: Attitudes, College Graduates, Computer Assisted Testing, Equated Scores
Allen, Nancy L.; Donoghue, John R. – 1995
This Monte Carlo study examined the effect of complex sampling of items on the measurement of differential item functioning (DIF) using the Mantel-Haenszel procedure. Data were generated using a three-parameter logistic item response theory model according to the balanced incomplete block (BIB) design used in the National Assessment of Educational…
Descriptors: Computer Assisted Testing, Difficulty Level, Elementary Secondary Education, Identification
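For context on the Mantel-Haenszel procedure this abstract names: examinees are stratified by total score, a 2x2 group-by-correctness table is formed in each stratum, and a common odds ratio is pooled across strata. The sketch below implements that statistic in plain Python on invented data; the -2.35 * ln(alpha) rescaling to the ETS delta metric is a standard convention, not something taken from this paper.

```python
# Mantel-Haenszel common odds ratio for one studied item.
from collections import defaultdict
import math

def mantel_haenszel(records):
    """records: iterable of (total_score, group, correct) tuples,
    where group is 'ref' or 'focal' and correct is 0/1."""
    strata = defaultdict(lambda: {"A": 0, "B": 0, "C": 0, "D": 0})
    for score, group, correct in records:
        cell = strata[score]
        if group == "ref":
            cell["A" if correct else "B"] += 1   # reference right / wrong
        else:
            cell["C" if correct else "D"] += 1   # focal right / wrong
    num = den = 0.0
    for cell in strata.values():
        n = cell["A"] + cell["B"] + cell["C"] + cell["D"]
        num += cell["A"] * cell["D"] / n   # ref right * focal wrong
        den += cell["B"] * cell["C"] / n   # ref wrong * focal right
    return num / den  # alpha_MH

alpha = mantel_haenszel([
    (10, "ref", 1), (10, "ref", 0), (10, "focal", 1), (10, "focal", 0),
    (12, "ref", 1), (12, "ref", 1), (12, "focal", 0), (12, "focal", 1),
])
print(alpha, -2.35 * math.log(alpha))  # ETS delta scale: MH D-DIF
```

Values of alpha far from 1.0 (equivalently, large |D-DIF|) flag the studied item for review.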