Showing all 10 results
Peer reviewed
Rodgers, Joseph Lee; Rodgers, Jacci L. – Journal of Continuing Higher Education, 2011
We propose, develop, and evaluate the black ink-red ink (BIRI) method of testing. This approach uses two different methods within the same test administration setting, one that matches recognition learning and the other that matches recall learning. Students purposively define their own tradeoff between the two approaches. Evaluation of the method…
Descriptors: Testing, Test Anxiety, Recall (Psychology), Recognition (Psychology)
Peer reviewed
Rhemtulla, Mijke; Brosseau-Liard, Patricia E.; Savalei, Victoria – Psychological Methods, 2012
A simulation study compared the performance of robust normal theory maximum likelihood (ML) and robust categorical least squares (cat-LS) methodology for estimating confirmatory factor analysis models with ordinal variables. Data were generated from 2 models with 2-7 categories, 4 sample sizes, 2 latent distributions, and 5 patterns of category…
Descriptors: Factor Analysis, Computation, Simulation, Sample Size
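The data-generation step in simulation studies of this kind (ordinal indicators produced by thresholding continuous factor-model data at category cutpoints) can be sketched as below. This is a minimal illustration, not the study's actual design: the single-factor structure, loadings of 0.7, five categories, and symmetric thresholds are all assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

# One-factor model: latent factor eta drives 4 continuous indicators.
# Loadings of 0.7 are illustrative; residual variance keeps total variance at 1.
eta = rng.normal(size=n)
loadings = np.full(4, 0.7)
y_cont = eta[:, None] * loadings + rng.normal(scale=np.sqrt(1 - 0.49), size=(n, 4))

# Ordinal observed variables: cut each indicator at fixed thresholds,
# yielding 5 categories coded 0..4.
thresholds = [-1.5, -0.5, 0.5, 1.5]
y_ord = np.digitize(y_cont, thresholds)

print(y_ord.shape)          # (500, 4)
print(np.unique(y_ord))     # categories 0..4
```

Varying the number and placement of thresholds is what produces the different category counts and threshold patterns compared across conditions in studies like this one.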
Peer reviewed
Heath, Barbara; Lakshmanan, Aruna; Perlmutter, Aaron; Davis, Lori – International Journal of Research & Method in Education, 2010
Instrument choice is a crucial part of evaluation of professional development programmes. The use of multiple evaluation methods helps in triangulation, and offers insight into the developmental sequence involved in the changes in teacher beliefs and practice. Most current instruments are self-contained and not designed for use in conjunction with…
Descriptors: Evaluation Needs, Literature Reviews, Evaluation Methods, Professional Development
Peer reviewed
Sadler, D. Royce – Assessment & Evaluation in Higher Education, 2009
When assessment tasks are set for students in universities and colleges, a common practice is to advise them of the criteria that will be used for grading their responses. Various schemes for using multiple criteria have been widely advocated in the literature. Each scheme is designed to offer clear benefits for students. Breaking down holistic…
Descriptors: Student Evaluation, Grading, Evaluation Criteria, Evaluation Problems
Peer reviewed
Huxham, Mark; Laybourn, Phyllis; Cairncross, Sandra; Gray, Morag; Brown, Norrie; Goldfinch, Judy; Earl, Shirley – Assessment & Evaluation in Higher Education, 2008
A study was conducted comparing the feedback received from students about teaching obtained using different instruments. Twelve first- and second-year undergraduate modules were selected from seven different schools within a single university. Students studying each module were allocated to "questionnaire" and "comparator" groups. "Questionnaire"…
Descriptors: Feedback (Response), Formative Evaluation, Focus Groups, Questionnaires
Peer reviewed
Knoeppel, Robert C.; Rinehart, James S. – Educational Considerations, 2008
The purpose of this study was to compare multiple regression with canonical analysis in order to introduce a new, policy relevant methodology to the literature on production functions. Findings from this study confirmed the results of past inquiries that found a relationship between the inputs to schooling and measures of student achievement. A…
Descriptors: Academic Achievement, Literature Reviews, Multiple Regression Analysis, Predictor Variables
Peer reviewed
Muis, Krista R.; Winne, Philip H.; Jamieson-Noel, Dianne – British Journal of Educational Psychology, 2007
Background: A programme of construct validity research is necessary to clarify previous research on self-regulation and to provide a stronger basis for future research. Aim: A multitrait-multimethod (MTMM) analysis was conducted to assess convergent and discriminant validity of three self-regulation measures: the Learning and Study Strategies…
Descriptors: Validity, Multitrait Multimethod Techniques, Construct Validity, Higher Education
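The core of an MTMM analysis can be illustrated with a toy correlation check: convergent validity requires high correlations between different methods measuring the same trait, and discriminant validity requires lower correlations between different traits. The traits ("planning", "monitoring") and methods ("self_report", "think_aloud") below are hypothetical labels for the sketch, not the instruments in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 150

# Two independent latent traits, each measured by two methods with noise.
trait = rng.normal(size=(n, 2))
scores = {
    ("planning", "self_report"):   trait[:, 0] + rng.normal(scale=0.6, size=n),
    ("planning", "think_aloud"):   trait[:, 0] + rng.normal(scale=0.6, size=n),
    ("monitoring", "self_report"): trait[:, 1] + rng.normal(scale=0.6, size=n),
    ("monitoring", "think_aloud"): trait[:, 1] + rng.normal(scale=0.6, size=n),
}

def r(a, b):
    return np.corrcoef(scores[a], scores[b])[0, 1]

# Convergent: same trait, different methods -> should be high.
conv = r(("planning", "self_report"), ("planning", "think_aloud"))
# Discriminant: different traits, same method -> should be lower.
disc = r(("planning", "self_report"), ("monitoring", "self_report"))
print(round(conv, 2), round(disc, 2))
```

In a real MTMM study the full monotrait-heteromethod and heterotrait-monomethod blocks of the correlation matrix are inspected; the abstract's point is that low convergence across self-regulation measures undermines exactly this pattern.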
Peer reviewed
PDF on ERIC
Patry, Jean-Luc – Quality of Higher Education, 2005
Quality assurance and evaluation research, like other fields of social research and its application, are confronted with a series of problems. In the present paper, I want first to give a list of such problems, although necessarily incomplete. It is then claimed that while there is no "perfect" solution to these problems, critical multiplism may…
Descriptors: Evaluation Research, Research Problems, Guidelines, Evaluation Methods
Peer reviewed
Riley, Nigel R. – Technology, Pedagogy and Education, 2006
This study is concerned with evaluating critical learning of 10-11 year-old students studying global citizenship through an online discussion environment. The evaluation is based on analysing the use of language as both a social reasoning tool and cognitive learning tool. Evaluation methods use: (1) content analysis of the online discussion…
Descriptors: Language Usage, Concept Mapping, Computer Mediated Communication, Content Analysis
Peer reviewed
Brennan, Robert T.; Molnar, Beth E.; Earls, Felton – Journal of Community Psychology, 2007
Correlational analysis, classical test theory, confirmatory factor analysis, and multilevel Rasch modeling were used to refine a measure of adolescents' exposure to violence (ETV). Interpersonal violence could be distinguished from other potentially traumatic events; it was also possible to distinguish three routes of exposure (victimization,…
Descriptors: Violence, Adolescents, Factor Analysis, Urban Youth