Showing 3,796 to 3,810 of 9,530 results
OECD Publishing, 2013
The Programme for the International Assessment of Adult Competencies (PIAAC) has been planned as an ongoing program of assessment. The first cycle of the assessment has involved two "rounds." The first round, which is covered by this report, took place over the period of January 2008-October 2013. The main features of the first cycle of…
Descriptors: International Assessment, Adults, Skills, Test Construction
Peer reviewed
Direct link
Soureshjani, Kamal Heidari – Language Testing in Asia, 2011
Among several factors affecting the performance of testees on a test is the sequence of the test items. The present study served as an attempt to shed light on the effect of test item sequence on Iranian EFL learners' performance on a test of grammar. To achieve such a purpose, 70 language learners of English at Pooyesh Institute in Shiraz (the…
Descriptors: Test Format, Test Items, Difficulty Level, Grammar
Educational Testing Service, 2011
Choosing whether to test via computer is the most difficult and consequential decision the designers of a testing program can make. The decision is difficult because of the wide range of choices available. Designers can choose where and how often the test is made available, how the test items look and function, how those items are combined into…
Descriptors: Test Items, Testing Programs, Testing, Computer Assisted Testing
Keiffer, Elizabeth Ann – ProQuest LLC, 2011
A differential item functioning (DIF) simulation study was conducted to explore the type and level of impact that contamination had on type I error and power rates in DIF analyses when the suspect item favored the same or opposite group as the DIF items in the matching subtest. Type I error and power rates were displayed separately for the…
Descriptors: Test Items, Sample Size, Simulation, Identification
Abedlaziz, Nabeel; Ismail, Wail; Hussin, Zaharah – Online Submission, 2011
Test items are designed to provide information about the examinees. Difficult items are designed to be more demanding and easy items are less so. However, sometimes, test items carry with their demands other than those intended by the test developer (Scheuneman & Gerritz, 1990). When personal attributes such as gender systematically affect…
Descriptors: Test Bias, Test Items, Difficulty Level, Gender Differences
Peer reviewed
Direct link
Fisher, Kathleen M.; Williams, Kathy S.; Lineback, Jennifer Evarts – CBE - Life Sciences Education, 2011
Biology student mastery regarding the mechanisms of diffusion and osmosis is difficult to achieve. To monitor comprehension of these processes among students at a large public university, we developed and validated an 18-item Osmosis and Diffusion Conceptual Assessment (ODCA). This assessment includes two-tiered items, some adopted or modified…
Descriptors: Test Items, Diagnostic Tests, Biology, Scientific Concepts
Peer reviewed
Direct link
Luo, Fenqjen; Lo, Jane-Jane; Leu, Yuh-Chyn – School Science and Mathematics, 2011
The purpose of this paper is to show the similarities as well as the differences of fundamental fraction knowledge owned by preservice elementary teachers from the United States (N = 89) and Taiwan (N = 85). To this end, we examined and compared their performance on an instrument including 15 multiple-choice test items. The items were categorized…
Descriptors: Preservice Teacher Education, Preservice Teachers, Test Items, Multiple Choice Tests
Peer reviewed
Direct link
Alessandri, Guido; Vecchione, Michele; Tisak, John; Barbaranelli, Claudio – Multivariate Behavioral Research, 2011
When a self-report instrument includes a balanced number of positively and negatively worded items, factor analysts often use method factors to aid model fitting. The nature of these factors, often referred to as acquiescence, is still debated. Relying upon previous results (Alessandri et al., 2010; DiStefano & Motl, 2006, 2008; Rauch, Schweizer,…
Descriptors: Evidence, Construct Validity, Validity, Personality
Peer reviewed
Direct link
Liu, Ou Lydia; Lee, Hee-Sun; Linn, Marcia C. – Educational Assessment, 2011
Both multiple-choice and constructed-response items have known advantages and disadvantages in measuring scientific inquiry. In this article we explore the function of explanation multiple-choice (EMC) items and examine how EMC items differ from traditional multiple-choice and constructed-response items in measuring scientific reasoning. A group…
Descriptors: Science Tests, Multiple Choice Tests, Responses, Test Items
Peer reviewed
Direct link
Leighton, Jacqueline P.; Heffernan, Colleen; Cor, M. Kenneth; Gokiert, Rebecca J.; Cui, Ying – Applied Measurement in Education, 2011
The "Standards for Educational and Psychological Testing" indicate that test instructions, and by extension item objectives, presented to examinees should be sufficiently clear and detailed to help ensure that they respond as developers intend them to respond (Standard 3.20; AERA, APA, & NCME, 1999). The present study investigates…
Descriptors: Test Construction, Validity, Evidence, Science Tests
Peer reviewed
Direct link
Winke, Paula – Language Assessment Quarterly, 2011
In this study, I investigated the reliability of the U.S. Naturalization Test's civics component by asking 414 individuals to take a mock U.S. citizenship test comprising civics test questions. Using an incomplete block design of six forms with 16 nonoverlapping items and four anchor items on each form (the anchors connected the six subsets of…
Descriptors: Test Items, Citizenship, Civics, Test Validity
Peer reviewed
Direct link
White, Harold B. – Biochemistry and Molecular Biology Education, 2011
The author and other teaching faculty take pride in their ability to write creative and challenging examination questions. Their self-assessment is based on experience and their knowledge of their subject and discipline. Although their judgment may be correct, it is done usually in the absence of deep knowledge of what is known about the…
Descriptors: Test Items, Community Colleges, Molecular Biology, College Faculty
Peer reviewed
Direct link
Finch, Holmes – Applied Psychological Measurement, 2011
Estimation of multidimensional item response theory (MIRT) model parameters can be carried out using the normal ogive with unweighted least squares estimation with the normal-ogive harmonic analysis robust method (NOHARM) software. Previous simulation research has demonstrated that this approach does yield accurate and efficient estimates of item…
Descriptors: Item Response Theory, Computation, Test Items, Simulation
Maria Assunta Hardy – ProQuest LLC, 2011
Guidelines to screen and select common items for vertical scaling have been adopted from equating. Differences between vertical scaling and equating suggest that these guidelines may not apply to vertical scaling in the same way that they apply to equating. For example, in equating the examinee groups are assumed to be randomly equivalent, but in…
Descriptors: Elementary School Mathematics, Mathematics Tests, Test Construction, Test Items
Peer reviewed
Download full text (PDF on ERIC)
Cihangir-Cankaya, Zeynep – Educational Sciences: Theory and Practice, 2012
There are two main objectives of this study: The first is to reconsider the Listening Skill Scale and the second is to compare the levels of students of counseling and guidance according to the situations of whether they took the courses including the listening skills and to gender variable. In accordance with these objectives, the data obtained…
Descriptors: Measures (Individuals), Psychology, Guidance, Listening Skills