Showing 1 to 15 of 37 results
Peer reviewed
Download full text (PDF on ERIC)
Walter M. Stroup; Anthony Petrosino; Corey Brady; Karen Duseau – North American Chapter of the International Group for the Psychology of Mathematics Education, 2023
Tests of statistical significance often play a decisive role in establishing the empirical warrant of evidence-based research in education. The results from pattern-based assessment items, as introduced in this paper, are categorical and multimodal and do not immediately support the use of measures of central tendency as typically related to…
Descriptors: Statistical Significance, Comparative Analysis, Research Methodology, Evaluation Methods
Ingrid Nix; Marion Hall – Sage Research Methods Cases, 2016
This case study describes a method of collecting data on students' experiences of developing digital literacy (information and communications technology) skills as part of their course at the United Kingdom's Open University. An online reflective quiz was integrated into three health and social care modules, offering students the opportunity both…
Descriptors: Foreign Countries, Questionnaires, Interviews, Data Collection
Peer reviewed
Direct link
Hitchcock, John H.; Johanson, George A. – Research in the Schools, 2015
Understanding the reason(s) for Differential Item Functioning (DIF) in the context of measurement is difficult. Although identifying potential DIF items is typically a statistical endeavor, understanding the reasons for DIF (and item repair or replacement) might require investigations that can be informed by qualitative work. Such work is…
Descriptors: Mixed Methods Research, Test Items, Item Analysis, Measurement
Peer reviewed
Direct link
Gehlbach, Hunter – Journal of Early Adolescence, 2015
As pressure builds to assess students, teachers, and schools, educational practitioners and policy makers are increasingly looking toward student perception surveys as a promising means to collect high-quality, useful data. For instance, the widely cited Measures of Effective Teaching study lists student perception surveys as one of the three key…
Descriptors: Surveys, Evaluation Methods, Early Adolescents, Student Evaluation
Peer reviewed
Direct link
Koehn, Peter H.; Uitto, Juha I. – Higher Education: The International Journal of Higher Education and Educational Planning, 2014
Since the mid-1970s, a series of international declarations that recognize the critical link between environmental sustainability and higher education have been endorsed and signed by universities around the world. While academic initiatives in sustainability are blossoming, higher education lacks a comprehensive evaluation framework that is…
Descriptors: Sustainability, Program Evaluation, Curriculum Evaluation, Educational Research
Peer reviewed
Direct link
Houssart, Jenny; Barber, Patti – Education 3-13, 2014
This article considers various approaches to consulting primary pupils about mathematics. This is done first through a literature review and second by drawing on our experience of designing and piloting pupil consultation in collaboration with staff in one primary school. Our concern is with the utility and drawbacks of the methods used rather…
Descriptors: Elementary School Mathematics, Elementary School Students, Literature Reviews, Consultation Programs
Peer reviewed
Direct link
Stuive, Ilse; Kiers, Henk A. L.; Timmerman, Marieke E. – Educational and Psychological Measurement, 2009
A common question in test evaluation is whether an a priori assignment of items to subtests is supported by empirical data. If the analysis results indicate the assignment of items to subtests under study is not supported by data, the assignment is often adjusted. In this study the authors compare two methods on the quality of their suggestions to…
Descriptors: Simulation, Item Response Theory, Test Items, Factor Analysis
Peer reviewed
Direct link
Klein Entink, R. H.; Fox, J. P.; van der Linden, W. J. – Psychometrika, 2009
Response times on test items are easily collected in modern computerized testing. When collecting both (binary) responses and (continuous) response times on test items, it is possible to measure the accuracy and speed of test takers. To study the relationships between these two constructs, the model is extended with a multivariate multilevel…
Descriptors: Test Items, Markov Processes, Item Response Theory, Measurement Techniques
Peer reviewed
Direct link
Miyazaki, Kei; Hoshino, Takahiro; Mayekawa, Shin-ichi; Shigemasu, Kazuo – Psychometrika, 2009
This study proposes a new item parameter linking method for the common-item nonequivalent groups design in item response theory (IRT). Previous studies assumed that examinees are randomly assigned to either test form. However, examinees can frequently select their own test forms and tests often differ according to examinees' abilities. In such…
Descriptors: Test Format, Item Response Theory, Test Items, Test Bias
Peer reviewed
Direct link
Raykov, Tenko; Mels, Gerhard – Structural Equation Modeling: A Multidisciplinary Journal, 2009
A readily implemented procedure is discussed for interval estimation of indexes of interrelationship between items from multiple-component measuring instruments as well as between items and total composite scores. The method is applicable with categorical (ordinal) observed variables, and can be widely used in the process of scale construction,…
Descriptors: Intervals, Structural Equation Models, Biomedicine, Correlation
Peer reviewed
Direct link
Wilhelm, Oliver; Robitzsch, Alexander – Measurement: Interdisciplinary Research and Perspectives, 2009
The paper by Rupp and Templin (2008) is an excellent work on the characteristics and features of cognitive diagnostic models (CDM). In this article, the authors comment on some substantial and methodological aspects of this focus paper. They organize their comments by going through issues associated with the terms "cognitive,"…
Descriptors: Research Methodology, Test Items, Models, Diagnostic Tests
Peer reviewed
Direct link
Davison, Mark L.; Kim, Se-Kang; Close, Catherine – Multivariate Behavioral Research, 2009
A profile is a vector of scores for one examinee. The mean score in the vector can be interpreted as a measure of overall profile height, the variance can be interpreted as a measure of within person variation, and the ipsatized vector of score deviations about the mean can be said to describe the pattern in the score profile. A within person…
Descriptors: Vocational Interests, Interest Inventories, Profiles, Scores
Peer reviewed
Direct link
Yoo, Jin Eun – Educational and Psychological Measurement, 2009
This Monte Carlo study investigates the beneficial effect of including auxiliary variables during estimation of confirmatory factor analysis models with multiple imputation. Specifically, it examines the influence of sample size, missing rates, missingness mechanism combinations, missingness types (linear or convex), and the absence or presence…
Descriptors: Monte Carlo Methods, Research Methodology, Test Validity, Factor Analysis
Peer reviewed
Direct link
Shujuan, Wang; Meihua, Qian; Jianxin, Zhang – Journal of Psychoeducational Assessment, 2009
This article examines the psychometric structure of the Anxiety Control Questionnaire (ACQ) in Chinese adolescents. With the data collected from 212 senior high school students (94 females, 110 males, 8 unknown), seven models are tested using confirmatory factor analyses in the framework of the multitrait-multimethod strategy. Results indicate…
Descriptors: Multitrait Multimethod Techniques, Factor Structure, Adolescents, Measures (Individuals)
Peer reviewed
Direct link
Cui, Ying; Leighton, Jacqueline P. – Journal of Educational Measurement, 2009
In this article, we introduce a person-fit statistic called the hierarchy consistency index (HCI) to help detect misfitting item response vectors for tests developed and analyzed based on a cognitive model. The HCI ranges from -1.0 to 1.0, with values close to -1.0 indicating that students respond unexpectedly or differently from the responses…
Descriptors: Test Length, Simulation, Correlation, Research Methodology