ERIC Number: ED497448
Record Type: Non-Journal
Publication Date: 2005-Feb-5
Pages: 18
Abstractor: Author
ISBN: N/A
ISSN: N/A
EISSN: N/A
Available Date: N/A
Can We Really Trust Anyone Who Profits from Ranking Higher Education Institutions, or How Would One Evaluate Institutional Quality?
Micceri, Ted
Online Submission
Although numerous quality ratings exist in today's media-centric environment (e.g., Money Magazine, U.S. News and World Report), it is quite difficult to provide any reasonably meaningful estimate of institutional quality, either qualitative or quantitative. Global ratings of university "quality" abound, despite the fact that there is really no such thing as "a university," but rather merely a heterogeneous collection of programs and services that can vary substantially in all aspects of quality and reputation within the same school. Usually, quality is associated with inputs (students, funding, etc.), processes (faculty/student ratios, average class sizes, etc.), or outputs (graduates, highly cited scholars, research awards, patents developed, etc.). Some (e.g., U.S. News and World Report) add a poorly designed reputation measure, while others (NRC, 1995) use thoroughly considered and well-constructed reputation methods. Unfortunately, as in all endeavors, the well-designed methods tend to be costly and time consuming, and therefore quite limited in scope, while the poorly designed measures are comparatively easy to create and, for magazines at least, profitable, so they tend to occur annually and apply to all schools. So, how does one meaningfully assess the complex construct of institutional effectiveness, or quality at pursuing the conjoint missions of education, research, and community service? This study evaluates, discusses, and conducts analyses relating to several possible indicators of institutional quality within the context of Florida public higher education, including the State University System (SUS) and the public community college system. Generally, processes and outputs are preferable to inputs when defining an institution's quality. However, according to numerous researchers (Harvey and Green, 1993; Astin, 1990; Barnett, 1988; CNAA, 1990), much of what a student walks away from college with is already present at college entry. 
Therefore, two entry selectivity measures (GPA and test scores) were included as inputs, along with faculty/student ratios as processes, and outputs including, for community colleges, enrollment in SUS institutions and subsequent performance there, and for SUS institutions, research expenditures, NRC ratings, faculty awards, and the difference between expected and actual graduation rates of students. These measures are reported for community colleges and SUS institutions as an example of what such measures might show, although no attempt is made to create a single, global estimate of "quality," because this appears to be a wholly inappropriate and meaningless aim. (Contains 1 figure and 10 tables.) [This report represents an Internal Technical Report, Office of Planning and Analysis, University of South Florida, Tampa, Florida.]
Publication Type: Numerical/Quantitative Data; Reports - Evaluative
Education Level: Higher Education; Two Year Colleges
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Identifiers - Location: Florida
Grant or Contract Numbers: N/A
Author Affiliations: N/A