Showing 1 to 15 of 16 results
Peer reviewed
David Eubanks; Scott A. Moore – Assessment Update, 2025
Assessment and institutional research offices have too much data and too little time. Standard reporting often crowds out opportunities for innovative research. Fortunately, advancements in data science now offer a clear solution. It is equal parts technique and philosophy. The first and easiest step is to modernize data work. This column…
Descriptors: Higher Education, Educational Assessment, Data Science, Research Methodology
Susan E. Ramlo – Sage Research Methods Cases, 2022
Q methodology (Q) offers a scientific way to study subjectivity, meaning people's viewpoints about a topic. The underlying assumption of Q is that when people share an experience, they do not necessarily form the same viewpoint about that experience. This is different from the use of Likert-scale surveys that report results using aggregate…
Descriptors: COVID-19, Pandemics, Higher Education, Q Methodology
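The entry above contrasts Q methodology with aggregate Likert-scale reporting. The computational core of Q is to correlate and factor-analyze people (their whole Q-sorts) rather than items, so that shared viewpoints emerge as factors. Below is a minimal sketch of that by-person step; the tiny Q-sort matrix is hypothetical, and a plain eigendecomposition stands in for the centroid or principal-components extraction (with rotation) that dedicated Q software would apply.

```python
import numpy as np

# Hypothetical Q-sort data: rows = statements, columns = participants.
# Each column is one person's forced ranking of 6 statements on a -2..+2 scale.
q_sorts = np.array([
    [ 2,  2, -1],
    [ 1,  0, -2],
    [ 0, -1,  2],
    [-1,  1,  1],
    [-2, -2,  0],
    [ 0,  0,  0],
], dtype=float)

# Q methodology correlates persons, not items: the participant-by-participant
# correlation matrix shows which people share a viewpoint.
person_corr = np.corrcoef(q_sorts, rowvar=False)
print("Person-by-person correlations:\n", np.round(person_corr, 2))

# Factor the person correlations (plain eigendecomposition here, as a stand-in
# for the extraction-plus-rotation step in dedicated Q software).
eigvals, eigvecs = np.linalg.eigh(person_corr)
order = np.argsort(eigvals)[::-1]
print("Share of variance per 'viewpoint' factor:",
      np.round(eigvals[order] / eigvals.sum(), 2))
```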
Peer reviewed
Stodberg, Ulf – Assessment & Evaluation in Higher Education, 2012
The use of e-assessment in higher education is a relatively new educational practice that has been more frequently studied in recent years. This review aims to summarise some research on e-assessment, providing an overview based on articles from three well-established scientific journals. Focusing on research topics, settings for e-assessment and…
Descriptors: Program Effectiveness, Online Courses, Longitudinal Studies, Research Methodology
Peer reviewed
Warburton, Bill – Assessment & Evaluation in Higher Education, 2009
The uptake of computer-assisted assessment (CAA) in UK higher education (HE) on a large scale lags behind the expectations of CAA specialists. A research project was undertaken with the aim of discovering and addressing the underlying reasons for this. The research was conducted according to Strauss and Corbin's (1998) prescription for grounded theory (GT) research. During…
Descriptors: Grounded Theory, Higher Education, College Faculty, Administrators
Peer reviewed
Jacobs, Ronald L.; And Others – Journal of Educational Computing Research, 1985
This study adapted the Hidden Figures Test for use on PLATO and determined the reliability of the computerized version compared to the paper-and-pencil version. Results indicate the test was successfully adapted with some modifications, and it was judged reliable although it may be measuring additional constructs. (MBR)
Descriptors: Computer Assisted Testing, Educational Research, Field Dependence Independence, Higher Education
Rippey, Robert M.; Voytovich, Anthony E. – Journal of Computer-Based Instruction, 1983
Describes a computer-based method of confidence-testing, available in batch processing and interactive form, which improves a student's ability to assess probabilities during clinical diagnosis. The methods and results of three experiments are presented. (EAO)
Descriptors: Clinical Diagnosis, Computer Assisted Testing, Confidence Testing, Decision Making
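The Rippey and Voytovich entry centers on confidence testing, where examinees report probabilities over the answer options rather than making a single choice and are scored with a proper scoring rule. The sketch below uses the quadratic (Brier-type) rule as an illustration; the specific admissible scoring function used in the study is not given here, so that choice is an assumption.

```python
# Minimal sketch of confidence-testing scoring: the examinee assigns a
# probability to each option and is scored with a proper scoring rule, which
# rewards honest probability reports. The quadratic (Brier-type) rule below is
# one common choice, assumed here for illustration.

def quadratic_confidence_score(probs, correct_index):
    """Score one item: maximized in expectation by reporting true beliefs."""
    if abs(sum(probs) - 1.0) > 1e-9:
        raise ValueError("probabilities must sum to 1")
    outcome = [1.0 if i == correct_index else 0.0 for i in range(len(probs))]
    # 1 minus the squared error between stated probabilities and the outcome.
    return 1.0 - sum((p - o) ** 2 for p, o in zip(probs, outcome))

# Confident and correct scores near 1; hedging scores less; confident and
# wrong is penalized heavily.
print(quadratic_confidence_score([0.90, 0.05, 0.05], correct_index=0))  # ~0.985
print(quadratic_confidence_score([0.34, 0.33, 0.33], correct_index=0))  # ~0.35
print(quadratic_confidence_score([0.05, 0.90, 0.05], correct_index=0))  # ~-0.72
```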
Peer reviewed
Lukin, Mark E.; And Others – Computers in Human Behavior, 1985
This study utilized a Latin Squares design to assess equivalence of computer and paper-and-pencil testing methods in a clinical setting with college students. No significant differences between scores on measures of anxiety, depression, and psychological reactance were found across group and administration format. Most subjects preferred…
Descriptors: Comparative Analysis, Computer Assisted Testing, Higher Education, Literature Reviews
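The Lukin et al. entry rests on a Latin Squares design to counterbalance the order in which testing conditions are administered. The sketch below shows the basic counterbalancing idea with a cyclic Latin square: every condition appears exactly once in each position across groups. The condition labels are hypothetical, and the cyclic construction is only one way to build such a square.

```python
# Minimal sketch of Latin Squares counterbalancing: each group receives the
# conditions in a different order, and each condition occupies each ordinal
# position exactly once across groups. Condition labels are hypothetical.

def cyclic_latin_square(conditions):
    n = len(conditions)
    return [[conditions[(row + col) % n] for col in range(n)] for row in range(n)]

conditions = ["anxiety (computer)", "anxiety (paper)",
              "depression (computer)", "depression (paper)"]
for group, order in enumerate(cyclic_latin_square(conditions), start=1):
    print(f"Group {group}: " + " -> ".join(order))
```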
Cheskis-Gold, Rena; Loescher, Ruth; Shepard-Rabadam, Elizabeth; Carroll, Barbara – Online Submission, 2004
This paper provides an overview of the entire process necessary to developing a university-wide web survey, from the community-building process for creating support for the survey and determining the questions, to the specific tasks necessary for designing and administering an efficient web product. (Contains 17 tables.)
Descriptors: Institutional Research, Higher Education, Surveys, Computer Assisted Testing
Bobbitt, L. G.; Carroll, C. D. – 1993
The National Center for Education Statistics conducts surveys which require the coding of the respondent's major field of study. This paper presents a new system for the coding of major field of study. It operates on-line in a Computer Assisted Telephone Interview (CATI) environment and allows conversational checks to verify coding directly from…
Descriptors: Algorithms, Coding, Computer Assisted Testing, Computer Software
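The Bobbitt and Carroll entry describes on-line coding of major field of study inside a CATI interview, with conversational checks that let the interviewer verify the assigned code on the spot. The sketch below illustrates that general loop with a fuzzy string match against a small code list; the codes, labels, matching rule, and the suggest_code helper are all illustrative assumptions, not the NCES system.

```python
import difflib

# Hypothetical excerpt of a major-field code list (codes and labels are
# illustrative, not the actual taxonomy used by NCES).
CODE_LIST = {
    "23.0101": "English Language and Literature",
    "26.0101": "Biology, General",
    "27.0101": "Mathematics, General",
    "45.1001": "Political Science and Government",
}

def suggest_code(response_text, cutoff=0.5):
    """Return (code, label) best matching a verbal response, or None."""
    by_label = {label.lower(): code for code, label in CODE_LIST.items()}
    hits = difflib.get_close_matches(response_text.lower(), list(by_label),
                                     n=1, cutoff=cutoff)
    if not hits:
        return None
    code = by_label[hits[0]]
    return code, CODE_LIST[code]

# The interviewer keys the respondent's answer; the system echoes the matched
# category back so it can be confirmed conversationally before being stored.
answer = "political science"
match = suggest_code(answer)
if match:
    print(f"Heard {answer!r}; confirm code {match[0]} ({match[1]})?")
else:
    print(f"No match for {answer!r}; probe for more detail.")
```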
Boser, Judith A.; And Others – 1984
Different formats for four types of research items were studied for ease of computer data entry. The types were: (1) numeric response items; (2) individual multiple choice items; (3) multiple choice items with the same response items; and (4) card column indicator placement. Each of the 13 experienced staff members of a major university's Data…
Descriptors: Ancillary School Services, Attitude Measures, Computer Assisted Testing, Computer Oriented Programs
Sorensen, H. Barbara – AEDS Monitor, 1985
This study validated computerized versions of selected tests from the Kit of Factor-Referenced Cognitive Tests of 1976, compared performance of higher education students on computerized and paper-and-pencil versions of the tests, and explored whether gender, academic status, or age interacted with the tests. (MBR)
Descriptors: Age Differences, Cognitive Ability, Cognitive Tests, Comparative Analysis
Thompson, Bruce; Melancon, Janet G. – 1990
Effect sizes have been increasingly emphasized in research as more researchers have recognized that: (1) all parametric analyses (t-tests, analyses of variance, etc.) are correlational; (2) effect sizes have played an important role in meta-analytic work; and (3) statistical significance testing is limited in its capacity to inform scientific…
Descriptors: Comparative Analysis, Computer Assisted Testing, Correlation, Effect Size
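The first point in the Thompson and Melancon entry, that parametric analyses are correlational, has a simple numerical face: an independent-groups t statistic converts directly into a correlation-based effect size via r^2 = t^2 / (t^2 + df). The sketch below works one made-up example.

```python
import math

# Worked example of the "parametric analyses are correlational" point:
# a two-group t statistic converts to a point-biserial correlation, so the
# t-test and a correlation of the outcome with group membership carry the
# same information. The numbers are made up.
t = 2.5    # observed t statistic
df = 48    # degrees of freedom (n1 + n2 - 2)

r_squared = t**2 / (t**2 + df)   # proportion of variance explained
r = math.sqrt(r_squared)         # point-biserial correlation

print(f"t = {t}, df = {df}")
print(f"r^2 = {r_squared:.3f} (effect size), r = {r:.3f}")
```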
Peer reviewed
Rosenfeld, Paul; And Others – American Behavioral Scientist, 1993
Asserts that current technology makes it possible to administer organizational surveys by using computers. Describes processes of conducting a computer-administered survey. Finds that respondents who complete computer surveys find them more interesting and seem to be more aware of their thoughts and feelings while completing them. (CFR)
Descriptors: Computer Assisted Testing, Computer Software, Higher Education, Needs Assessment
Spray, Judith A.; Reckase, Mark D. – 1994
The issue of test-item selection in support of decision making in adaptive testing is considered. The number of items needed to make a decision is compared for two approaches: selecting items from an item pool that are most informative at the decision point or selecting items that are most informative at the examinee's ability level. The first…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
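The Spray and Reckase entry compares two item-selection rules for classification decisions: pick the item most informative at the decision (cut) point, or the item most informative at the examinee's current ability estimate. The sketch below contrasts the two rules using the 2PL item information function; the item pool, cut score, ability estimate, and the 2PL model itself are illustrative assumptions rather than the paper's exact setup.

```python
import math

# Minimal sketch of the two item-selection rules, using the 2PL item
# information function I(theta) = a^2 * p * (1 - p), where p is the
# probability of a correct response at ability theta.

def item_information(theta, a, b):
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

pool = [  # (discrimination a, difficulty b) -- hypothetical item pool
    (1.2, -0.5), (0.8, 0.0), (1.5, 0.4), (1.0, 1.2), (1.8, -0.1),
]

cut_score = 0.0    # decision point for a pass/fail classification
theta_hat = 0.9    # current estimate of the examinee's ability

best_at_cut = max(pool, key=lambda it: item_information(cut_score, *it))
best_at_theta = max(pool, key=lambda it: item_information(theta_hat, *it))

# With this pool the two rules select different items, which is the contrast
# the paper examines in terms of test length needed for a decision.
print("Most informative at the cut score:     a=%.1f, b=%.1f" % best_at_cut)
print("Most informative at estimated ability: a=%.1f, b=%.1f" % best_at_theta)
```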
Peer reviewed
Dunnington, Richard A. – American Behavioral Scientist, 1993
Asserts that three decades of technological advancements in communications and computer technology have transformed, if not revolutionized, organizational survey use and potential. Concludes that organizational clients, respondents, and survey professionals all benefit from new technological developments. (CFR)
Descriptors: Computer Assisted Testing, Computers, Higher Education, Needs Assessment
Pages: 1 | 2