Showing 1 to 15 of 86 results
Peer reviewed
Cui, Ying; Chen, Fu; Lutsyk, Alina; Leighton, Jacqueline P.; Cutumisu, Maria – Assessment in Education: Principles, Policy & Practice, 2023
With the exponential increase in the volume of data available in the 21st century, data literacy skills have become vitally important in workplaces and everyday life. This paper provides a systematic review of available data literacy assessments targeted at different audiences and educational levels. The results can help researchers and…
Descriptors: Data, Information Literacy, 21st Century Skills, Competence
Peer reviewed
Thomas, Jason E.; Hornsey, Philip E. – Journal of Instructional Research, 2014
Formative Classroom Assessment Techniques (CATs) have been well-established instructional tools in higher education since their exposition in the late 1980s (Angelo & Cross, 1993). A large body of literature exists on the strengths and weaknesses of formative CATs. Simpson-Beck (2011) suggested insufficient quantitative evidence exists…
Descriptors: Classroom Techniques, Nontraditional Education, Adult Education, Formative Evaluation
Jin, Yan – Journal of Pan-Pacific Association of Applied Linguistics, 2011
The College English Test (CET) is an English language test designed for educational purposes, administered on a very large scale, and used for making high-stakes decisions. This paper discusses the key issues facing the CET during the course of its development in the past two decades. It argues that the most fundamental and critical concerns of…
Descriptors: High Stakes Tests, Language Tests, Measures (Individuals), Graduates
Peer reviewed
Price, James H.; And Others – Journal of School Health, 1985
This study examined the validity and reliability of a short obesity knowledge scale. A 12-item test was developed covering etiology of obesity, diseases related to obesity, weight loss techniques, and general information on obesity. Four test formats were compared, revealing that the scale needs further validation. (Author/MT)
Descriptors: Dietetics, Health Education, Higher Education, Norm Referenced Tests
Peer reviewed
Schriesheim, Chester A.; Hill, Kenneth D. – Educational and Psychological Measurement, 1981
The empirical evidence does not support the prevailing conventional wisdom that it is advisable to mix positively and negatively worded items in psychological measures to counteract acquiescence response bias. An experiment, evaluating subjects' ability to respond accurately to both positive and reversed items on a questionnaire, analyzed post-hoc…
Descriptors: Bias, Higher Education, Questionnaires, Response Style (Tests)
Peer reviewed
Vansickle, Timothy R.; And Others – Measurement and Evaluation in Counseling and Development, 1989
Examined the equivalence of two versions of the Strong-Campbell Interest Inventory (SCII) using four combinations of paper-and-pencil and computer administrations with college student subjects (N=75). Found slightly better test-retest reliability for the computer-based SCII. (Author/ABL)
Descriptors: College Students, Computer Assisted Testing, Higher Education, Interest Inventories
Peer reviewed
Schriesheim, Chester A.; And Others – Applied Psychological Measurement, 1989
LISREL maximum likelihood confirmatory factor analyses assessed the effects of grouped and random formats on convergent and discriminant validity of two sets of questionnaires--job characteristics scales and satisfaction measures--each administered to 80 college students. The grouped format was superior, and the usefulness of LISREL confirmatory…
Descriptors: College Students, Higher Education, Measures (Individuals), Questionnaires
Crehan, Kevin; Haladyna, Thomas M. – 1989
The present study tested two common multiple-choice item-writing rules. A recent review of research revealed that much of the advice given for writing multiple-choice test items is based on experience and wisdom rather than on empirical research. The rules assessed in this study include: (1) the phrasing of the stem in the form of…
Descriptors: College Students, Higher Education, Multiple Choice Tests, Psychology
Peer reviewed
Owen, Steven V.; Froman, Robin D. – Educational and Psychological Measurement, 1987
To further test the efficacy of three-option achievement items, parallel three- and five-option item tests were distributed randomly to college students. Results showed no differences in mean item difficulty, mean discrimination, or total test score, but a substantial reduction in time spent on three-option items. (Author/BS)
Descriptors: Achievement Tests, Higher Education, Multiple Choice Tests, Test Format
Peer reviewed
Pratt, C.; Hacker, R. G. – Educational and Psychological Measurement, 1984
A unidimensional latent trait model was used to test a single-factor hypothesis of the Lawson Classroom Test of Formal Reasoning. The test failed to provide a valid measure of formal reasoning, a result of a test format that neglected aspects of formal reasoning emphasized by Inhelder and Piaget. (Author/DWH)
Descriptors: Cognitive Processes, Group Testing, Higher Education, Latent Trait Theory
Fishman, Judith – Writing Program Administration, 1984
Examines the CUNY-WAT program and questions many aspects of it, especially the choice and phrasing of topics. (FL)
Descriptors: Essay Tests, Higher Education, Test Format, Test Items
Peer reviewed
Cziko, Gary A. – TESOL Quarterly, 1982
Describes an attempt to construct an ESL dictation test that would: (1) be appropriate for a wide range of ability, (2) be easy and fast to score, (3) consist of set items that would form both a unidimensional and cumulative scale, and (4) yield scores that would be directly interpretable with respect to specified levels of English proficiency.…
Descriptors: Criterion Referenced Tests, English (Second Language), Higher Education, Scores
Peer reviewed
Ward, William C.; And Others – Journal of Educational Measurement, 1980
Free response and machine-scorable versions of a test called Formulating Hypotheses were compared with respect to construct validity. Results indicate that the different forms involve different cognitive processes and measure different qualities. (Author/JKS)
Descriptors: Cognitive Processes, Cognitive Tests, Higher Education, Personality Traits
Peer reviewed
Melancon, Janet G.; Thompson, Bruce – Psychology in the Schools, 1989
Investigated the measurement characteristics of both forms of the Finding Embedded Figures Test (FEFT). College students (N=302) completed either both forms of the FEFT or one form of the FEFT and the Group Embedded Figures Test. Results suggest that the FEFT forms provide reasonably reliable and valid data. (Author/NB)
Descriptors: College Students, Field Dependence Independence, Higher Education, Multiple Choice Tests
Peer reviewed
Kumar, V. K.; And Others – Measurement and Evaluation in Counseling and Development, 1986
Disguising scale purpose by using an innocuous skill title and filler items had no effect on the reliability and validity of Rotter's Interpersonal Trust Scale. (Author)
Descriptors: College Students, Higher Education, Response Style (Tests), Student Attitudes