Palumbo, David B.; Reed, W. Michael – Journal of Research on Computing in Education, 1989 (peer reviewed)
Discussion of the use of microcomputers in education focuses on the use of microcomputers for evaluation purposes. A study that collected performance data on both college students and the evaluation instrument is described, reliability and validity of the test are discussed, and the use of microcomputers to redesign instructional systems is…
Descriptors: Academic Achievement, Computer Assisted Instruction, Computer Assisted Testing, Computer Managed Instruction
Reece, Mary J.; Gable, Robert K. – Educational and Psychological Measurement, 1982 (peer reviewed)
A 10-item Attitudes Toward Computers instrument was developed to measure students' attitudes toward the use of computers. A factorial validity study revealed one identifiable factor dimension, entitled General Attitude Toward Computers, with an estimated alpha internal consistency reliability of .87. (Author/PN)
Descriptors: Attitude Measures, Computer Assisted Instruction, Computers, Factor Structure
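The alpha internal consistency reliability reported above (Cronbach's alpha) can be computed directly from an item-score matrix. A minimal sketch, assuming respondents-by-items data; the function name and data layout are illustrative, not from the article:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / total-score variance).

    scores: list of per-respondent lists of item scores.
    """
    k = len(scores[0])  # number of items
    n = len(scores)     # number of respondents

    def var(xs):
        # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

With perfectly consistent items (every respondent scores all items identically), alpha reaches its maximum of 1.0.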
Thompson, Bruce; Levitov, Justin E. – Collegiate Microcomputer, 1985
Discusses features of a microcomputer program, SCOREIT, used at New Orleans' Loyola University and several high schools to score and analyze test results. Benefits and dimensions of the program's automated test and item analysis are outlined, and several examples illustrating test and item analyses by SCOREIT are presented. (MBR)
Descriptors: Computer Assisted Testing, Computer Software, Difficulty Level, Higher Education
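The automated item analysis a program like SCOREIT performs typically reports each item's difficulty (proportion correct) and discrimination (upper-group minus lower-group proportion correct). A simplified sketch of that classical analysis, assuming 0/1-scored responses; the function name and the 27% group convention are assumptions, not details from the article:

```python
def item_analysis(responses):
    """Classical item analysis for dichotomous (0/1) item scores.

    responses: list of per-examinee lists of 0/1 item scores.
    Returns one (difficulty, discrimination) pair per item:
    difficulty = proportion correct overall; discrimination =
    proportion correct in the top 27% (by total score) minus the
    proportion correct in the bottom 27%.
    """
    n = len(responses)
    by_total = sorted(range(n), key=lambda i: sum(responses[i]))
    g = max(1, round(0.27 * n))          # size of upper/lower groups
    lower, upper = by_total[:g], by_total[-g:]
    stats = []
    for j in range(len(responses[0])):
        p = sum(r[j] for r in responses) / n
        d = (sum(responses[i][j] for i in upper) / g
             - sum(responses[i][j] for i in lower) / g)
        stats.append((p, d))
    return stats
```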
Kimball, James C. – Journal of Employment Counseling, 1988 (peer reviewed)
Developed paper-and-pencil and microcomputer versions of prototype occupational interest inventory for academically disadvantaged or functionally illiterate adults. Compared results obtained from 30 such adults on the United States Employment Service Interest Inventory and both versions of the prototype inventory. Results revealed acceptable…
Descriptors: Adult Literacy, Adults, Comparative Testing, Computer Assisted Testing
Delcourt, Marcia A. B.; Kinzie, Mable B. – Journal of Research and Development in Education, 1993 (peer reviewed)
Describes the development of two instruments for use with preservice and practicing teachers: Attitudes toward Computer Technologies and Self-Efficacy for Computer Technologies. Graduate and undergraduate students completed the instruments. Results provide initial instrument validation. The paper presents data on content validity and results of…
Descriptors: Computer Attitudes, Computer Uses in Education, Education Majors, Evaluation Methods
Duffelmeyer, Frederick A. – Reading Teacher, 1985 (peer reviewed)
Argues that, while computers make the computation of readability formulas easy, teachers should not forget the need to apply judgment and common sense to the results. Discusses RIXRATE, a computerized version of the Rix index, and compares its performance to that of the Rauding Scale. (FL)
Descriptors: Computer Assisted Testing, Elementary Secondary Education, Microcomputers, Readability
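The Rix index itself is one of the simplest readability formulas to compute: the count of long words (seven or more letters) divided by the count of sentences. A minimal sketch of that core calculation, not of the RIXRATE program itself; the tokenization rules here are simplifying assumptions:

```python
import re

def rix(text):
    """Rix readability index: long words (7+ letters) per sentence.

    Sentences are delimited naively by ., !, or ?; words are runs of
    ASCII letters. Real readability tools use more careful parsing.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    long_words = [w for w in words if len(w) >= 7]
    return len(long_words) / len(sentences)
```

Higher Rix values indicate harder text; published tables map score bands to approximate grade levels.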
Shriberg, Lawrence D.; And Others – Journal of Speech and Hearing Disorders, 1986 (peer reviewed)
Speech-delayed preschool children's (N=21) responses to booklet-presented pictures (Photo Articulation Test) were compared to similar microcomputer-presented graphics with a variety of screen formats. (CB)
Descriptors: Articulation (Speech), Computer Software, Disability Identification, Microcomputers
Armstrong, Ronald D.; And Others – Journal of Educational Statistics, 1994 (peer reviewed)
A network-flow model is formulated for constructing parallel tests based on classical test theory while using test reliability as the criterion. Practitioners can specify a test-difficulty distribution for values of item difficulties as well as test-composition requirements. An empirical study illustrates the reliability of generated tests. (SLD)
Descriptors: Algorithms, Computer Assisted Testing, Difficulty Level, Item Banks
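The article's network-flow formulation is beyond a short sketch, but the goal it optimizes, parallel forms of equal length with matched difficulty distributions, can be illustrated with a much simpler greedy heuristic (not the authors' model): sort the item bank by difficulty and deal items across tests in snake order so each form receives a comparable spread. All names here are illustrative:

```python
def assemble_parallel_tests(bank, n_tests, length):
    """Greedy sketch of parallel test assembly (not a network-flow model).

    bank: list of (item_id, difficulty) pairs, len(bank) >= n_tests * length.
    Items are sorted by difficulty, then dealt round-robin in snake order
    so the tests end up with similar difficulty distributions.
    """
    items = sorted(bank, key=lambda x: x[1])
    tests = [[] for _ in range(n_tests)]
    for block in range(length):
        chunk = items[block * n_tests:(block + 1) * n_tests]
        order = chunk if block % 2 == 0 else chunk[::-1]
        for test, item in zip(tests, order):
            test.append(item)
    return tests
```

A network-flow or integer-programming model replaces this heuristic when test-composition constraints (content areas, item exposure) must also be satisfied.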
Cobern, William W. – 1986
This computer program, written in BASIC, performs three different calculations of test reliability: (1) the Kuder-Richardson method; (2) the "common split-half" method; and (3) the Rulon-Guttman split-half method. The program reads sequential-access data files for microcomputers that have been set up by statistical packages such as…
Descriptors: Computer Software, Difficulty Level, Educational Research, Equations (Mathematics)
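The first two calculations the program performs are standard formulas. A minimal Python sketch of KR-20 and an odd-even split-half estimate with the Spearman-Brown correction, assuming 0/1 item data; this is an illustration of the formulas, not a translation of the BASIC program:

```python
def kr20(responses):
    """Kuder-Richardson formula 20 for dichotomous (0/1) item data:
    (k/(k-1)) * (1 - sum(p*q) / total-score variance)."""
    n, k = len(responses), len(responses[0])
    totals = [sum(r) for r in responses]
    mean = sum(totals) / n
    var_total = sum((t - mean) ** 2 for t in totals) / n
    pq = 0.0
    for j in range(k):
        p = sum(r[j] for r in responses) / n   # item difficulty
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_total)

def split_half(responses):
    """Odd-even split-half reliability, stepped up with Spearman-Brown."""
    odd = [sum(r[0::2]) for r in responses]    # odd-numbered items
    even = [sum(r[1::2]) for r in responses]   # even-numbered items
    n = len(responses)
    mo, me = sum(odd) / n, sum(even) / n
    cov = sum((o - mo) * (e - me) for o, e in zip(odd, even)) / n
    so = (sum((o - mo) ** 2 for o in odd) / n) ** 0.5
    se = (sum((e - me) ** 2 for e in even) / n) ** 0.5
    r_half = cov / (so * se)                   # half-test correlation
    return 2 * r_half / (1 + r_half)           # Spearman-Brown correction
```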
McLeod, John – Australian Journal of Reading, 1983
Illustrates ways to link the motivating power and efficiency of the computer to the effectiveness and necessity of traditional teaching practice and criterion referenced testing in order to assess and teach spelling. (FL)
Descriptors: Computer Assisted Testing, Criterion Referenced Tests, Microcomputers, Motivation Techniques
Nelson, Larry R. – Educational Measurement: Issues and Practice, 1984 (peer reviewed)
The author argues that scoring, reporting, and deriving final grades can be considerably assisted by using a computer. He also contends that the savings in time and the computer database formed will allow instructors to determine test quality and reflect on the quality of instruction. (BW)
Descriptors: Achievement Tests, Affective Objectives, Computer Assisted Testing, Educational Testing
Meier, Scott T. – Computers in Human Behavior, 1988 (peer reviewed)
Description of the development of a theoretically based instrument--the Computer Aversion Scale (CAVS)--to measure negative reactions to computers, focuses on the use of microcomputers in the mental health field. Previous efforts to assess computer anxiety are reviewed, and studies testing the reliability and validity of the CAVS are described.…
Descriptors: Computer Assisted Testing, Concurrent Validity, Correlation, Factor Analysis
Kluever, Raymond C.; And Others – 1992
This study investigates the Computer Attitude Scale (CAS) in terms of reliability, factorial validity, and fit to a unidimensional model. The sample for this study consisted of 265 teachers from 20 schools and school districts in one state who attended evening and weekend classes that emphasized teachers teaching teachers about classroom…
Descriptors: Attitude Measures, Computer Literacy, Correlation, Elementary Secondary Education
Larson, Jerry W. – 1985
A study at Brigham Young University (Utah) investigated the feasibility of computer-assisted language placement testing in higher education. Benefits and problems of this approach for test administration, individualization of item selection, and recordkeeping were examined. Four steps were followed in production of a test for Spanish placement:…
Descriptors: College Second Language Programs, Computer Assisted Testing, Higher Education, Language Tests
Abdel-Gaid, Samiha; And Others – Journal of Research in Science Teaching, 1986 (peer reviewed)
Describes a study designed to develop a systematic procedure for constructing a valid Likert attitude scale. Discusses the 15-step procedure that was produced. Reviews the resulting development of an instrument designed to test the attitude of teachers toward the use of microcomputers in the classroom. (TW)
Descriptors: Attitude Measures, Computer Uses in Education, Educational Attitudes, Elementary Education

