Showing 916 to 930 of 3,128 results
Peters, Joshua A. – ProQuest LLC, 2016
Little is known about whether students' results differ between paper-and-pencil and computer-based high-stakes assessments when race and/or free and reduced lunch status are taken into account. The purpose of this study was to add new knowledge to this field by determining whether there is a…
Descriptors: Comparative Analysis, Computer Assisted Testing, Lunch Programs, High Stakes Tests
Peer reviewed
Direct link
Christ, Tanya; Chiu, Ming Ming; Currie, Ashelin; Cipielewski, James – Reading Psychology, 2014
This study tested how 53 kindergarteners' expressions of depth of vocabulary knowledge and use in novel contexts were related to in-context and out-of-context test formats for 16 target words. Applying multilevel, multi-categorical Logit to all 1,696 test item responses, the authors found that kindergarteners were more likely to express deep…
Descriptors: Correlation, Test Format, Kindergarten, Vocabulary Development
Peer reviewed
PDF on ERIC Download full text
McIntyre, Joe; Gehlbach, Hunter – Society for Research on Educational Effectiveness, 2014
Of all the approaches to collecting data in the social sciences, the administration of questionnaires to respondents is among the most prevalent. Despite their popularity, there is broad consensus among survey design experts that using these items introduces excessive error into respondents' ratings. The authors attempt to answer the following…
Descriptors: Questionnaires, Surveys, Likert Scales, Test Items
Peer reviewed
Direct link
Logan, Tracy – Mathematics Education Research Journal, 2015
Mathematics assessment and testing are increasingly situated within digital environments, with international tests moving to computer-based testing in the near future. This paper reports on a secondary data analysis that explored the influence of the mode of assessment--computer-based (CBT) and pencil-and-paper based (PPT)--and visuospatial ability…
Descriptors: Visual Perception, Spatial Ability, Mathematics Instruction, Mathematics Tests
Peer reviewed
Direct link
Sinharay, Sandip – Journal of Educational and Behavioral Statistics, 2015
Person-fit assessment may help the researcher to obtain additional information regarding the answering behavior of persons. Although several researchers examined person fit, there is a lack of research on person-fit assessment for mixed-format tests. In this article, the lz statistic and the ?2 statistic, both of which have been used for tests…
Descriptors: Test Format, Goodness of Fit, Item Response Theory, Bayesian Statistics
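Note on the entry above: the lz statistic mentioned by Sinharay (2015) is, in its standard dichotomous form, a standardized log-likelihood person-fit statistic (Drasgow, Levine, & Williams, 1985). The minimal Python sketch below illustrates only that basic form with hypothetical item parameters; it is not taken from the article, which extends person-fit assessment to mixed-format tests.

    # Illustrative sketch only: the standard lz person-fit statistic for
    # dichotomous items under a 2PL model (not the article's mixed-format extension).
    import numpy as np

    def two_pl_prob(theta, a, b):
        """Probability of a correct response under the 2PL model."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def lz_statistic(responses, theta, a, b):
        """Standardized log-likelihood person-fit statistic."""
        p = two_pl_prob(theta, a, b)
        q = 1.0 - p
        loglik = np.sum(responses * np.log(p) + (1 - responses) * np.log(q))
        expected = np.sum(p * np.log(p) + q * np.log(q))
        variance = np.sum(p * q * np.log(p / q) ** 2)
        return (loglik - expected) / np.sqrt(variance)

    # Hypothetical example: 10 items, one examinee with theta = 0.5
    a = np.array([1.2, 0.8, 1.5, 1.0, 0.9, 1.1, 1.3, 0.7, 1.4, 1.0])    # discriminations
    b = np.array([-1.0, -0.5, 0.0, 0.2, 0.5, 0.8, 1.0, -0.2, 0.3, 1.2])  # difficulties
    u = np.array([1, 1, 1, 0, 1, 0, 0, 1, 1, 0])                         # scored responses
    print(lz_statistic(u, theta=0.5, a=a, b=b))  # large negative values flag misfit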
Knudson, Joel; Hannan, Stephanie; O'Day, Jennifer; Castro, Marina – California Collaborative on District Reform, 2015
The Common Core State Standards represent an exciting step forward for California, and for the nation as a whole, in supporting instruction that can better prepare students for college and career success. Concurrent with the transition to the new standards, the Smarter Balanced Assessment Consortium (SBAC), of which California is a governing…
Descriptors: Academic Standards, State Standards, Measurement, Educational Assessment
Peer reviewed
Direct link
Jaeger, Martin; Adair, Desmond – European Journal of Engineering Education, 2017
Online quizzes have been shown to be effective learning and assessment approaches. However, if scenario-based online construction safety quizzes do not include time pressure similar to real-world situations, they reflect situations too ideally. The purpose of this paper is to compare engineering students' performance when carrying out an online…
Descriptors: Engineering Education, Quasiexperimental Design, Tests, Academic Achievement
Peer reviewed
PDF on ERIC Download full text
Tarun, Prashant; Krueger, Dale – Journal of Learning in Higher Education, 2016
In the United States system of education, the use of student evaluations grew from 29% in 1973 to 86% in 1993, which in turn has increased the weight of student evaluations in decisions about faculty retention, tenure, and promotion. However, the impact student evaluations have had on students' academic development generates complex educational…
Descriptors: Critical Thinking, Teaching Methods, Multiple Choice Tests, Essay Tests
Peer reviewed
Direct link
Hoshino, Yuko – Language Testing in Asia, 2013
This study compares the effect of different kinds of distractors on the level of difficulty of multiple-choice (MC) vocabulary tests in sentential contexts. This type of test is widely used in practical testing but it has received little attention so far. Furthermore, although distractors, which represent the unique characteristics of MC tests,…
Descriptors: Vocabulary Development, Comparative Analysis, Difficulty Level, Multiple Choice Tests
Peer reviewed
Direct link
Wang, Wen-Chung; Chen, Hui-Fang; Jin, Kuan-Yu – Educational and Psychological Measurement, 2015
Many scales contain both positively and negatively worded items. Reverse recoding of negatively worded items might not be enough for them to function as positively worded items do. In this study, we commented on the drawbacks of existing approaches to the wording effect in mixed-format scales and used bi-factor item response theory (IRT) models to…
Descriptors: Item Response Theory, Test Format, Language Usage, Test Items
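Note on the entry above: as a rough orientation to the bi-factor structure Wang, Chen, and Jin (2015) apply to wording effects, the short Python sketch below simulates dichotomous responses in which every item loads on a general trait and the negatively worded items additionally load on a separate wording (method) factor. All parameter values are assumptions for illustration, not the authors' model specification or code.

    # Illustrative sketch: bi-factor data generation with a general trait plus a
    # wording factor that affects only the (reverse-coded) negatively worded items.
    import numpy as np

    rng = np.random.default_rng(0)
    n_persons, n_items = 500, 10
    negatively_worded = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # last 5 items reversed

    theta_general = rng.normal(size=n_persons)   # substantive trait
    theta_wording = rng.normal(size=n_persons)   # method factor for negative wording
    a_general = rng.uniform(0.8, 1.5, size=n_items)
    a_wording = rng.uniform(0.3, 0.8, size=n_items) * negatively_worded
    b = rng.normal(size=n_items)

    # Response probabilities: general factor for all items, wording factor only
    # for negatively worded items (assumed already reverse-coded here).
    logit = (np.outer(theta_general, a_general)
             + np.outer(theta_wording, a_wording)
             - b)
    responses = (rng.random((n_persons, n_items)) < 1 / (1 + np.exp(-logit))).astype(int)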
Peer reviewed
Direct link
Yarnell, Jordy B.; Pfeiffer, Steven I. – Journal of Psychoeducational Assessment, 2015
The present study examined the psychometric equivalence of administering a computer-based version of the Gifted Rating Scale (GRS) compared with the traditional paper-and-pencil GRS-School Form (GRS-S). The GRS-S is a teacher-completed rating scale used in gifted assessment. The GRS-Electronic Form provides an alternative method of administering…
Descriptors: Gifted, Psychometrics, Rating Scales, Computer Assisted Testing
Peer reviewed
PDF on ERIC Download full text
Ghaderi, Marzieh; Mogholi, Marzieh; Soori, Afshin – International Journal of Education and Literacy Studies, 2014
The subject of testing has many facets and connections. One important issue is how to assess or measure students or learners: what tools, what style, and what goals should be used. In this paper, the author therefore attends to the style of testing in schools and other educational settings. Since the purposes of the educational system…
Descriptors: Testing, Testing Programs, Intermode Differences, Computer Assisted Testing
Peer reviewed
PDF on ERIC Download full text
Thomas, Jason E.; Hornsey, Philip E. – Journal of Instructional Research, 2014
Formative Classroom Assessment Techniques (CAT) have been well-established instructional tools in higher education since their exposition in the late 1980s (Angelo & Cross, 1993). A large body of literature exists surrounding the strengths and weaknesses of formative CATs. Simpson-Beck (2011) suggested insufficient quantitative evidence exists…
Descriptors: Classroom Techniques, Nontraditional Education, Adult Education, Formative Evaluation
Peer reviewed
Direct link
Warschausky, Seth; Van Tubbergen, Marie; Asbell, Shana; Kaufman, Jacqueline; Ayyangar, Rita; Donders, Jacobus – Assessment, 2012
This study examined the psychometric properties of test presentation and response formats that were modified to be accessible with the use of assistive technology (AT). First, the stability of psychometric properties was examined in 60 children, ages 6 to 12, with no significant physical or communicative impairments. Population-specific…
Descriptors: Testing, Assistive Technology, Testing Accommodations, Psychometrics
Peer reviewed
Direct link
van der Linden, Wim J. – Journal of Educational Measurement, 2011
A critical component of test speededness is the distribution of the test taker's total time on the test. A simple set of constraints on the item parameters in the lognormal model for response times is derived that can be used to control the distribution when assembling a new test form. As the constraints are linear in the item parameters, they can…
Descriptors: Test Format, Reaction Time, Test Construction
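Note on the entry above: van der Linden's lognormal response-time model takes log T_ij ~ Normal(beta_i - tau_j, 1/alpha_i^2) for test taker j on item i. The Python sketch below only simulates one test taker's total-time distribution on a hypothetical form under that model; the article's linear constraints for automated test assembly are not reproduced here, and all parameter values are assumed for illustration.

    # Illustrative sketch: simulate total test time under the lognormal
    # response-time model for one test taker on a hypothetical 6-item form.
    import numpy as np

    rng = np.random.default_rng(1)

    beta = np.array([4.0, 3.6, 4.2, 3.8, 4.1, 3.9])   # item time intensities (log seconds)
    alpha = np.array([1.5, 1.2, 1.8, 1.4, 1.6, 1.3])  # item discriminations for speed
    tau = 0.2                                          # test taker's speed parameter

    def simulate_total_time(n_replications=10_000):
        """Monte Carlo distribution of total test time for this form and test taker."""
        log_t = rng.normal(loc=beta - tau, scale=1.0 / alpha,
                           size=(n_replications, beta.size))
        return np.exp(log_t).sum(axis=1)

    total = simulate_total_time()
    print(total.mean(), np.percentile(total, 95))  # e.g., check a 95th-percentile time limit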