Showing 2,281 to 2,295 of 3,316 results
Peer reviewed
Ferrao, Maria – Assessment & Evaluation in Higher Education, 2010
The Bologna Declaration brought reforms into higher education that imply changes in teaching methods, didactic materials and textbooks, infrastructures and laboratories, etc. Statistics and mathematics are disciplines that traditionally have the worst success rates, particularly in non-mathematics core curricula courses. This research project,…
Descriptors: Foreign Countries, Computer Assisted Testing, Educational Technology, Educational Assessment
Froman, Terry; Brown, Shelly; Luzon-Canasi, Angela – Research Services, Miami-Dade County Public Schools, 2008
This study duplicated the procedures used by Greene and Winters (2006) on data from the Miami-Dade school system with the advantage of an additional two years' worth of information. The results indicated that the effects of the retention policy are far from clear and arguably negative. There is considerable evidence to suggest that the apparent…
Descriptors: Grade Repetition, School Holding Power, Evidence, Educational Policy
Peer reviewed
Lam, Tony C. M.; Kolic, Mary – Applied Psychological Measurement, 2008
Semantic incompatibility, an error in constructing measuring instruments for rating oneself, others, or objects, refers to the extent to which item wordings are incongruent with, and hence inappropriate for, scale labels and vice versa. This study examines the effects of semantic incompatibility on rating responses. Using a 2 x 2 factorial design…
Descriptors: Semantics, Rating Scales, Statistical Analysis, Academic Ability
Peer reviewed
Kim, Hanna – Science Activities: Classroom Projects and Curriculum Ideas, 2008
Testing the pH of various liquids is one of the most popular activities in 5th- through 8th-grade classrooms. The author presents an extensive pH-testing lesson based on a 5E (engagement, exploration, explanation, extension, and evaluation) teaching model. The activity provides students with the opportunity to learn about pH and how it relates to…
Descriptors: Scientific Research, Teaching Models, Error of Measurement, Science Instruction
Wang, Lin; Fan, Xitao – 1997
Standard statistical methods are used to analyze data that are assumed to have been collected using a simple random sampling scheme. These methods, however, tend to underestimate variance when the data are collected with a cluster design, which is often found in educational survey research. The purposes of this paper are to demonstrate how a cluster design…
Descriptors: Cluster Analysis, Educational Research, Error of Measurement, Estimation (Mathematics)
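The variance underestimation that Wang and Fan describe can be illustrated with a small simulation. This sketch is not from the cited paper; all data, cluster counts, and variance components are hypothetical, and the cluster-aware estimate here is the simplest possible one (treating cluster means as the independent units).

```python
import random
import statistics

random.seed(0)

def make_cluster_sample(n_clusters=30, per_cluster=20, between_sd=1.0, within_sd=1.0):
    # Each cluster shares a random effect, inducing intra-cluster correlation.
    data = []
    for _ in range(n_clusters):
        u = random.gauss(0, between_sd)
        data.append([random.gauss(u, within_sd) for _ in range(per_cluster)])
    return data

clusters = make_cluster_sample()
flat = [y for cl in clusters for y in cl]
n = len(flat)

# Naive SE: treats all 600 observations as a simple random sample.
naive_se = statistics.stdev(flat) / n ** 0.5

# Cluster-aware SE: treats the 30 cluster means as the independent units.
means = [statistics.mean(cl) for cl in clusters]
cluster_se = statistics.stdev(means) / len(means) ** 0.5

print(naive_se < cluster_se)  # the naive formula understates sampling error
```

With correlated observations inside clusters, the effective sample size is far smaller than the raw count, so the naive standard error is too small.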
Chang, Te-Sheng; Brookshire, William – 1997
The question of least-squares weights versus equal weights has been a subject of great interest to researchers for over 60 years. Several researchers have compared the efficiency of equal weights and that of least-squares weights under different conditions. Recently, S. V. Paunonen and R. C. Gardner stressed that the necessary and sufficient…
Descriptors: Correlation, Error of Measurement, Least Squares Statistics, Predictor Variables
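The equal-weights-versus-least-squares question can be made concrete with a cross-validation sketch. This is illustrative only, not the analysis of Chang and Brookshire or of Paunonen and Gardner: the data are simulated with nearly equal true weights, a condition under which equal weights are known to compete well on new data.

```python
import random

random.seed(1)

def simulate(n):
    # Criterion built from two standardized predictors with similar true weights.
    rows = []
    for _ in range(n):
        x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
        y = 0.5 * x1 + 0.4 * x2 + random.gauss(0, 1)
        rows.append((x1, x2, y))
    return rows

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

train, test = simulate(50), simulate(5000)

# Least-squares weights from the small calibration sample (2x2 normal equations).
s11 = sum(x1 * x1 for x1, _, _ in train)
s12 = sum(x1 * x2 for x1, x2, _ in train)
s22 = sum(x2 * x2 for _, x2, _ in train)
s1y = sum(x1 * y for x1, _, y in train)
s2y = sum(x2 * y for _, x2, y in train)
det = s11 * s22 - s12 * s12
b1 = (s22 * s1y - s12 * s2y) / det
b2 = (s11 * s2y - s12 * s1y) / det

ys = [y for _, _, y in test]
r_ls = corr([b1 * x1 + b2 * x2 for x1, x2, _ in test], ys)
r_eq = corr([x1 + x2 for x1, x2, _ in test], ys)
print(round(r_ls, 3), round(r_eq, 3))  # on new data the two composites are nearly tied
```

Because validity correlations are flat near the optimal weights, weights estimated from a small sample buy almost nothing over unit weights here.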
Tritchler, D. L.; Pedrini, D. T. – 1983
The N=1 analysis differs from a typical analysis of variance in that there is no within-cell error term. Thus interaction terms are used as estimates of error variance. If the interaction term in question represents a significant interaction, the F tests will be conservative. Tukey's test for nonadditivity will detect a common form of interaction.…
Descriptors: Analysis of Variance, Computer Programs, Data Analysis, Error of Measurement
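Tukey's one-degree-of-freedom test for nonadditivity, which the abstract mentions, can be computed by hand for a two-way table with one observation per cell. The table below is hypothetical, built with a multiplicative interaction so the test has something to find; this is a sketch of the standard textbook formula, not code from Tritchler and Pedrini.

```python
import random

random.seed(5)

# Hypothetical 4x3 table: additive row/column effects plus a multiplicative
# interaction (a * b) and a little noise -- one observation per cell.
rows, cols = [0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 2.0]
y = [[a + b + a * b + random.gauss(0, 0.1) for b in cols] for a in rows]

I, J = len(y), len(y[0])
grand = sum(map(sum, y)) / (I * J)
rmean = [sum(row) / J for row in y]
cmean = [sum(y[i][j] for i in range(I)) / I for j in range(J)]

# Tukey's one-degree-of-freedom sum of squares for nonadditivity.
num = sum(y[i][j] * (rmean[i] - grand) * (cmean[j] - grand)
          for i in range(I) for j in range(J))
ss_nonadd = num ** 2 / (sum((r - grand) ** 2 for r in rmean)
                        * sum((c - grand) ** 2 for c in cmean))

# Residual sum of squares from the additive (row + column) fit.
ss_resid = sum((y[i][j] - rmean[i] - cmean[j] + grand) ** 2
               for i in range(I) for j in range(J))

# F-ratio: nonadditivity against the remaining residual.
df_rem = (I - 1) * (J - 1) - 1
F = ss_nonadd / ((ss_resid - ss_nonadd) / df_rem)
print(F > 10)  # the multiplicative interaction is flagged as nonadditivity
```

When the interaction is of this removable (multiplicative) form, a transformation of the response can restore additivity, which is exactly the situation the test is designed to detect.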
Kolen, Michael J. – 1984
Large sample standard errors for the Tucker method of linear equating under the common item nonrandom groups design are derived under normality assumptions as well as under less restrictive assumptions. Standard errors of Tucker equating are estimated using the bootstrap method described by Efron. The results from different methods are compared…
Descriptors: Certification, Comparative Analysis, Equated Scores, Error of Measurement
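Kolen's bootstrap approach to equating standard errors can be sketched for the simplest case. This example uses plain mean/sigma linear equating on two simulated score samples, not the full Tucker common-item method, and all score distributions and sample sizes are hypothetical.

```python
import random
import statistics

random.seed(2)

# Hypothetical score samples for forms X and Y (stand-ins for real test data).
form_x = [random.gauss(50, 10) for _ in range(200)]
form_y = [random.gauss(53, 9) for _ in range(200)]

def linear_equate(x, xs, ys):
    # Map a form-X score onto the form-Y scale by matching mean and SD.
    return (statistics.mean(ys)
            + statistics.stdev(ys) / statistics.stdev(xs) * (x - statistics.mean(xs)))

def bootstrap_se(x, xs, ys, reps=500):
    # Resample each group with replacement, re-equate, and take the SD
    # of the equated score as its bootstrap standard error.
    eqs = []
    for _ in range(reps):
        bx = random.choices(xs, k=len(xs))
        by = random.choices(ys, k=len(ys))
        eqs.append(linear_equate(x, bx, by))
    return statistics.stdev(eqs)

se60 = bootstrap_se(60, form_x, form_y)
print(round(linear_equate(60, form_x, form_y), 2))
print(round(se60, 2))  # bootstrap standard error of the equated score at x = 60
```

The same resampling scheme extends to Tucker equating by recomputing the synthetic-population means and variances within each bootstrap replication.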
Peer reviewed
PDF on ERIC
Moses, Tim – ETS Research Report Series, 2006
Population invariance is an important requirement of test equating. An equating function is said to be population invariant when the choice of (sub)population used to compute the equating function does not matter. In recent studies, the extent to which equating functions are population invariant is typically addressed in terms of practical…
Descriptors: Equated Scores, Computation, Error of Measurement, Statistical Analysis
Lord, Frederic M. – 1981
A formula is derived for the asymptotic standard error of a true-score equating by item response theory (IRT). The equating method is applicable when the two tests to be equated are administered to different groups along with an "anchor test." Numerical standard errors are shown for an actual equating: (1) comparing the standard errors of…
Descriptors: Comparative Analysis, Equated Scores, Error of Measurement, Latent Trait Theory
Peer reviewed
Pizlo, Zygmunt; Stefanov, Emil; Saalweachter, John; Li, Zheng; Haxhimusa, Yll; Kropatsch, Walter G. – Journal of Problem Solving, 2006
We tested human performance on the Euclidean Traveling Salesman Problem using problems with 6-50 cities. Results confirmed our earlier findings that: (a) the time of solving a problem is proportional to the number of cities, and (b) the solution error grows very slowly with the number of cities. We formulated a new version of a pyramid model. The…
Descriptors: Problem Solving, Models, Mathematics, Visual Perception
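For readers unfamiliar with the Euclidean Traveling Salesman Problem studied by Pizlo et al., a minimal greedy heuristic shows the kind of approximate tour against which human performance is often compared. This nearest-neighbor construction is illustrative only; it is not the pyramid model the paper proposes, and the city coordinates are random.

```python
import math
import random

random.seed(3)

def tour_length(order, pts):
    # Total closed-tour distance, returning to the starting city.
    return sum(math.dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbor_tour(pts):
    # Greedy construction: always hop to the closest unvisited city.
    unvisited = set(range(1, len(pts)))
    tour = [0]
    while unvisited:
        last = pts[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, pts[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(random.random(), random.random()) for _ in range(25)]
tour = nearest_neighbor_tour(cities)
print(sorted(tour) == list(range(25)))   # every city visited exactly once
print(round(tour_length(tour, cities), 2))
```

Like human solvers, the greedy heuristic runs in time roughly proportional to the problem size (quadratic here), though its solution error behaves differently from the slowly growing human error the study reports.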
PDF pending restoration
Huynh, Huynh; Saunders, Joseph C. – 1979
Comparisons were made among various methods of estimating the reliability of pass-fail decisions based on mastery tests. The reliability indices that are considered are p, the proportion of agreements between two estimates, and kappa, the proportion of agreements corrected for chance. Estimates of these two indices were made on the basis of…
Descriptors: Cutting Scores, Error of Measurement, Mastery Tests, Reliability
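The two decision-consistency indices named in the abstract, raw agreement p and chance-corrected kappa, are easy to compute from two administrations of a mastery test. This simulation is not from Huynh and Saunders; the examinee abilities, measurement-error level, and cut score are all hypothetical.

```python
import random

random.seed(4)

# Hypothetical pass/fail decisions from two parallel mastery-test administrations.
true_ability = [random.gauss(0, 1) for _ in range(500)]

def decide(t):
    # Cut score at 0, with independent measurement error on each administration.
    return int(t + random.gauss(0, 0.5) > 0)

first = [decide(t) for t in true_ability]
second = [decide(t) for t in true_ability]

n = len(first)
# p: raw proportion of consistent pass/fail decisions across the two forms.
p = sum(a == b for a, b in zip(first, second)) / n

# kappa: agreement corrected for the agreement expected by chance alone.
p1, p2 = sum(first) / n, sum(second) / n
p_chance = p1 * p2 + (1 - p1) * (1 - p2)
kappa = (p - p_chance) / (1 - p_chance)

print(round(p, 3), round(kappa, 3))
```

Because chance alone produces substantial agreement on a two-category decision, kappa is always lower than p, which is why the two indices can rank tests differently.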
Peer reviewed
Forsyth, Robert A. – Applied Psychological Measurement, 1978
This note shows that, under conditions specified by Levin and Subkoviak (TM 503 420), it is not necessary to specify the reliabilities of observed scores when comparing completely randomized designs with randomized block designs. Certain errors in their illustrative example are also discussed. (Author/CTM)
Descriptors: Analysis of Variance, Error of Measurement, Hypothesis Testing, Reliability
Peer reviewed
Levin, Joel R.; Subkoviak, Michael J. – Applied Psychological Measurement, 1978
Comments (TM 503 706) on an earlier article (TM 503 420) concerning the comparison of the completely randomized design and the randomized block design are acknowledged and appreciated. In addition, potentially misleading notions arising from these comments are addressed and clarified. (See also TM 503 708). (Author/CTM)
Descriptors: Analysis of Variance, Error of Measurement, Hypothesis Testing, Reliability
Peer reviewed
Forsyth, Robert A. – Applied Psychological Measurement, 1978
This note continues the discussion of earlier articles (TM 503 420, TM 503 706, and TM 503 707), comparing the completely randomized design with the randomized block design. (CTM)
Descriptors: Analysis of Variance, Error of Measurement, Hypothesis Testing, Reliability