Showing 6,376 to 6,390 of 7,091 results
Peer reviewed
Miller, Tristan – Journal of Educational Computing Research, 2003
Latent semantic analysis (LSA) is an automated, statistical technique for comparing the semantic similarity of words or documents. In this article, I examine the application of LSA to automated essay scoring. I compare LSA methods to earlier statistical methods for assessing essay quality, and critically review contemporary essay-scoring systems…
Descriptors: Semantics, Test Scoring Machines, Essays, Semantic Differential
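The LSA technique Miller reviews can be illustrated with a minimal sketch: build a term-document matrix, truncate its SVD to a low-rank latent space, and compare documents by cosine similarity there. The function name, raw term counts, and k=2 truncation below are illustrative assumptions, not details from the article:

```python
import numpy as np

def lsa_similarity(docs, k=2):
    """Toy LSA: term-document counts -> truncated SVD -> cosine in latent space.

    Real systems use large corpora, tf-idf weighting, and k in the hundreds;
    this sketch only shows the mechanics on a tiny document list."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    idx = {w: i for i, w in enumerate(vocab)}
    A = np.zeros((len(vocab), len(docs)))
    for j, d in enumerate(docs):
        for w in d.lower().split():
            A[idx[w], j] += 1.0
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = min(k, len(s))
    D = (np.diag(s[:k]) @ Vt[:k]).T  # each row: one document in k-dim latent space
    a, b = D[0], D[1]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

In essay-scoring applications of LSA, the same similarity is computed between a student essay and pre-scored reference essays, and the score is borrowed from the nearest neighbors.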
Peer reviewed
Clariana, Roy B.; Wallace, Patricia – Journal of Educational Computing Research, 2007
This proof-of-concept investigation describes a computer-based approach for deriving the knowledge structure of individuals and of groups from their written essays, and considers the convergent criterion-related validity of the computer-based scores relative to human rater essay scores and multiple-choice test scores. After completing a…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Construct Validity, Cognitive Structures
Peer reviewed
Peddecord, K. Michael; Holsclaw, Patricia; Jacobson, Isabel Gomez; Kwizera, Lisa; Rose, Kelly; Gersberg, Richard; Macias-Reynolds, Violet – Journal of Continuing Education in the Health Professions, 2007
Introduction: Few studies have rigorously evaluated the effectiveness of health-related continuing education using satellite distribution. This study assessed participants' professional characteristics and their changes in knowledge, attitudes, and actions taken after viewing a public health preparedness training course on mass vaccination…
Descriptors: Program Effectiveness, Measures (Individuals), Programming (Broadcast), Internet
Peer reviewed
Full text (PDF) available on ERIC
Gu, Lixiong; Drake, Samuel; Wolfe, Edward W. – Journal of Technology, Learning, and Assessment, 2006
This study seeks to determine whether item features are related to observed differences in item difficulty (DIF) between computer- and paper-based test delivery media. Examinees responded to 60 quantitative items similar to those found on the GRE general test in either a computer-based or paper-based medium. Thirty-eight percent of the items were…
Descriptors: Test Bias, Test Items, Educational Testing, Student Evaluation
Peer reviewed
Full text (PDF) available on ERIC
Dikli, Semire – Journal of Technology, Learning, and Assessment, 2006
Automated Essay Scoring (AES) is defined as computer technology that evaluates and scores written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). AES systems are mainly used to overcome time, cost, reliability, and generalizability issues in writing assessment (Bereiter, 2003; Burstein,…
Descriptors: Scoring, Writing Evaluation, Writing Tests, Standardized Tests
Peer reviewed
Chen, Chih-Ming; Hong, Chin-Ming; Chen, Shyuan-Yi; Liu, Chao-Yu – Educational Technology & Society, 2006
Learning performance assessment aims to evaluate what knowledge learners have acquired from teaching activities. Objective technical measures of learning performance are difficult to develop, but are extremely important for both teachers and learners. Learning performance assessment using learning portfolios or web server log data is becoming an…
Descriptors: Summative Evaluation, Student Evaluation, Formative Evaluation, Performance Based Assessment
Peer reviewed
Roever, Carsten – Language Testing, 2006
Despite increasing interest in interlanguage pragmatics research, research on assessment of this crucial area of second language competence still lags behind assessment of other aspects of learners' developing second language (L2) competence. This study describes the development and validation of a 36-item web-based test of ESL pragmalinguistics,…
Descriptors: Familiarity, Test Validity, Speech Acts, Interlanguage
Schaeffer, Gary A.; And Others – 1993
This report contains results of a field test conducted to determine the relationship between a Graduate Record Examinations (GRE) linear computer-based test (CBT) and a paper-and-pencil (P&P) test with the same items. Recent GRE examinees participated in the field test by taking either the CBT or the P&P test. Data from the field test…
Descriptors: Attitudes, College Graduates, Computer Assisted Testing, Equated Scores
Allen, Nancy L.; Donoghue, John R. – 1995
This Monte Carlo study examined the effect of complex sampling of items on the measurement of differential item functioning (DIF) using the Mantel-Haenszel procedure. Data were generated using a three-parameter logistic item response theory model according to the balanced incomplete block (BIB) design used in the National Assessment of Educational…
Descriptors: Computer Assisted Testing, Difficulty Level, Elementary Secondary Education, Identification
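The Mantel-Haenszel DIF procedure studied by Allen and Donoghue reduces, per item, to pooling 2x2 (group x correct/incorrect) tables across matching-score strata into a common odds ratio. A minimal sketch, assuming the standard ETS delta-scale transform (the table layout and function name are illustrative, not from the paper):

```python
import math

def mantel_haenszel_dif(strata):
    """Mantel-Haenszel common odds ratio across score strata.

    strata: list of (ref_correct, ref_wrong, focal_correct, focal_wrong),
    one tuple per matching-score level. Returns (alpha_MH, MH D-DIF),
    where MH D-DIF = -2.35 * ln(alpha_MH) is the ETS delta-scale statistic;
    alpha_MH = 1 (D-DIF = 0) means no DIF, alpha_MH > 1 favors the
    reference group."""
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        if n == 0:
            continue  # skip empty strata
        num += a * d / n
        den += b * c / n
    alpha = num / den
    return alpha, -2.35 * math.log(alpha)
```

The Monte Carlo question in the study is how complex item sampling (e.g., the BIB booklet design) perturbs these stratum counts and hence the DIF statistic.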
Flores, Kathryn Younger – 1995
This paper presents preliminary, but statistically significant, findings from a study that compares two methods of measuring instructional effectiveness: global evaluation by experts and systematic observation using the SCRIBE Ob.2 software developed at the University of Texas at Austin. Hierarchical instruction of a performance skill…
Descriptors: Classroom Observation Techniques, College Faculty, Comparative Analysis, Computer Assisted Testing
PDF pending restoration
Fiorello, Catherine A. – 1996
Research has produced mixed results in evaluating whether people are influenced more by computer output than by other information. The present study examined how teachers' perceptions of psychological reports differ when the reports are identified as being produced by a computer program or by a school psychologist. Subjects were 40 experienced…
Descriptors: Computer Assisted Testing, Computer Uses in Education, Credibility, Diagnostic Tests
Treadway, Ray – 1997
The integration of test-banks for computer-based testing, textbooks, and electronic lecture notes in first-year mathematics courses has changed the way mathematics is taught at Bennett College (Greensboro, North Carolina). Classes meet in two electronic classrooms each with 27 computers on a local area network and a projection system. An…
Descriptors: College Curriculum, College Mathematics, Computer Assisted Testing, Computer System Design
PDF pending restoration
Bakken, Carol H. – 1996
Thousands of older incarcerated youth (17-18 years old) pass through the juvenile justice system every year. Many are not viable candidates for traditional high school graduation because of limited earned credits and large gaps in education. The purpose of this study was to establish an effective and efficient means for determining students'…
Descriptors: Adaptive Testing, Computer Assisted Testing, Correlation, Equivalency Tests
Roos, Linda L.; And Others – 1992
Computerized adaptive (CA) testing uses an algorithm to match examinee ability to item difficulty, while self-adapted (SA) testing allows the examinee to choose the difficulty of his or her items. Research comparing SA and CA testing has shown that examinees experience lower anxiety and improved performance with SA testing. All previous research…
Descriptors: Ability Identification, Adaptive Testing, Algebra, Algorithms
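The CA-testing algorithm described in the Roos et al. abstract, matching item difficulty to examinee ability, is commonly implemented as maximum-information item selection under an IRT model. A minimal sketch using the three-parameter logistic (3PL) model; the specific items, parameters, and function names are illustrative assumptions, not from the paper:

```python
import math

def p3pl(theta, a, b, c):
    """3PL probability of a correct response at ability theta
    (a: discrimination, b: difficulty, c: guessing)."""
    return c + (1 - c) / (1 + math.exp(-1.7 * a * (theta - b)))

def item_information(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta."""
    p = p3pl(theta, a, b, c)
    return (1.7 * a) ** 2 * ((1 - p) / p) * ((p - c) / (1 - c)) ** 2

def next_item(theta, items):
    """CA step: administer the item most informative at the current
    ability estimate. items: list of (a, b, c) tuples."""
    return max(items, key=lambda it: item_information(theta, *it))
```

Self-adapted (SA) testing replaces `next_item` with the examinee's own difficulty choice; the research question in studies like this one is what that substitution costs in measurement precision versus what it buys in reduced anxiety.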
Smith, Nancy J.; And Others – 1993
The Grammar, Spelling, and Punctuation (GSP) test is administered to students in the College of Communication at the University of Texas, Austin, as a means of determining eligibility to register for certain courses in journalism, broadcasting, and advertising. The test was administered in a paper-and-pencil version to 16 students and in a…
Descriptors: Adaptive Testing, College Students, Computer Assisted Testing, Eligibility