Showing 6,271 to 6,285 of 7,091 results
Peer reviewed
Peddecord, K. Michael; Holsclaw, Patricia; Jacobson, Isabel Gomez; Kwizera, Lisa; Rose, Kelly; Gersberg, Richard; Macias-Reynolds, Violet – Journal of Continuing Education in the Health Professions, 2007
Introduction: Few studies have rigorously evaluated the effectiveness of health-related continuing education using satellite distribution. This study assessed participants' professional characteristics and their changes in knowledge, attitudes, and actions taken after viewing a public health preparedness training course on mass vaccination…
Descriptors: Program Effectiveness, Measures (Individuals), Programming (Broadcast), Internet
Peer reviewed
Gu, Lixiong; Drake, Samuel; Wolfe, Edward W. – Journal of Technology, Learning, and Assessment, 2006
This study seeks to determine whether item features are related to observed differences in item difficulty (DIF) between computer- and paper-based test delivery media. Examinees responded to 60 quantitative items similar to those found on the GRE general test in either a computer-based or paper-based medium. Thirty-eight percent of the items were…
Descriptors: Test Bias, Test Items, Educational Testing, Student Evaluation
Peer reviewed
Dikli, Semire – Journal of Technology, Learning, and Assessment, 2006
Automated Essay Scoring (AES) is defined as the computer technology that evaluates and scores the written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). AES systems are mainly used to overcome time, cost, reliability, and generalizability issues in writing assessment (Bereiter, 2003; Burstein,…
Descriptors: Scoring, Writing Evaluation, Writing Tests, Standardized Tests
Peer reviewed
Chen, Chih-Ming; Hong, Chin-Ming; Chen, Shyuan-Yi; Liu, Chao-Yu – Educational Technology & Society, 2006
Learning performance assessment aims to evaluate what knowledge learners have acquired from teaching activities. Objective technical measures of learning performance are difficult to develop, but are extremely important for both teachers and learners. Learning performance assessment using learning portfolios or web server log data is becoming an…
Descriptors: Summative Evaluation, Student Evaluation, Formative Evaluation, Performance Based Assessment
Peer reviewed
Roever, Carsten – Language Testing, 2006
Despite increasing interest in interlanguage pragmatics research, research on assessment of this crucial area of second language competence still lags behind assessment of other aspects of learners' developing second language (L2) competence. This study describes the development and validation of a 36-item web-based test of ESL pragmalinguistics,…
Descriptors: Familiarity, Test Validity, Speech Acts, Interlanguage
Wainer, Howard; And Others – 1991
When an examination consists, in whole or in part, of constructed response items, it is a common practice to allow the examinee to choose among a variety of questions. This procedure is usually adopted so that the limited number of items that can be completed in the allotted time does not unfairly affect the examinee. This results in the de facto…
Descriptors: Adaptive Testing, Chemistry, Comparative Analysis, Computer Assisted Testing
Gibbs, William J.; Lario-Gibbs, Annette M. – 1995
This paper discusses a computer-based prototype called TestMaker that enables educators to create computer-based tests. Given the functional needs of faculty, the host of research implications computer technology has for assessment, and current educational perspectives such as constructivism and their impact on testing, the purposes for developing…
Descriptors: College Faculty, Computer Assisted Testing, Computer Software, Computer Uses in Education
Bergstrom, Betty; And Others – 1994
Examinee response times from a computerized adaptive test taken by 204 examinees taking a certification examination were analyzed using a hierarchical linear model. Two equations were posed: a within-person model and a between-person model. Variance within persons was eight times greater than variance between persons. Several variables…
Descriptors: Adaptive Testing, Adults, Certification, Computer Assisted Testing
Schaeffer, Gary A.; And Others – 1995
This report summarizes the results from two studies. The first assessed the comparability of scores derived from linear computer-based (CBT) and computer adaptive (CAT) versions of the three Graduate Record Examinations (GRE) General Test measures. A verbal CAT was taken by 1,507, a quantitative CAT by 1,354, and an analytical CAT by 995…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Equated Scores
Burstein, Jill C.; Kaplan, Randy M. – 1995
There is a considerable interest at Educational Testing Service (ETS) to include performance-based, natural language constructed-response items on standardized tests. Such items can be developed, but the projected time and costs required to have these items scored by human graders would be prohibitive. In order for ETS to include these types of…
Descriptors: Computer Assisted Testing, Constructed Response, Cost Effectiveness, Hypothesis Testing
Taniar, David; Rahayu, Wenny – 1996
Most use of multimedia technology in teaching and learning to date has emphasized the teaching aspect only; the application of multimedia to examinations has been neglected. This paper addresses how multimedia technology can be applied to the automation of assessment, by proposing a prototype of a multimedia question bank, which is able to…
Descriptors: Audiotape Recordings, Computer Assisted Testing, Computer Graphics, Elementary Secondary Education
Schnipke, Deborah L.; Pashley, Peter J. – 1997
Differences in test performance on time-limited tests may be due in part to differential response-time rates between subgroups, rather than real differences in the knowledge, skills, or developed abilities of interest. With computer-administered tests, response times are available and may be used to address this issue. This study investigates…
Descriptors: Computer Assisted Testing, Data Analysis, English, High Stakes Tests
Matthews, Don – 1992
At Humber College (HC) in Toronto, Ontario, Canada, the Digital Electronics (DE) program utilizes a computerized learning infrastructure called Computer Managed Learning (CML). The program, which has been under development for several years, is flexible enough to build a unique program of studies for each individual student and allows for the…
Descriptors: Community Colleges, Computer Assisted Testing, Computer Managed Instruction, Distance Education
De Ayala, R. J. – 1992
One important and promising application of item response theory (IRT) is computerized adaptive testing (CAT). The implementation of a nominal response model-based CAT (NRCAT) was studied. Item pool characteristics for the NRCAT as well as the comparative performance of the NRCAT and a CAT based on the three-parameter logistic (3PL) model were…
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Computer Simulation
Bergstrom, Betty A. – 1992
This paper reports on existing studies and uses meta-analysis to compare and synthesize the results of 20 studies from 8 research reports comparing the ability measure equivalence of computer adaptive tests (CAT) and conventional paper-and-pencil tests. Using the research synthesis techniques developed by Hedges and Olkin (1985), it is possible to…
Descriptors: Ability, Adaptive Testing, Comparative Analysis, Computer Assisted Testing