| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 215 |
| Since 2022 (last 5 years) | 1084 |
| Since 2017 (last 10 years) | 2594 |
| Since 2007 (last 20 years) | 4955 |
| Audience | Records |
| --- | --- |
| Practitioners | 653 |
| Teachers | 563 |
| Researchers | 250 |
| Students | 201 |
| Administrators | 81 |
| Policymakers | 22 |
| Parents | 17 |
| Counselors | 8 |
| Community | 7 |
| Support Staff | 3 |
| Media Staff | 1 |
| Location | Records |
| --- | --- |
| Turkey | 226 |
| Canada | 223 |
| Australia | 155 |
| Germany | 116 |
| United States | 99 |
| China | 90 |
| Florida | 86 |
| Indonesia | 82 |
| Taiwan | 78 |
| United Kingdom | 73 |
| California | 66 |
| What Works Clearinghouse Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 1 |
Kavakli, Nurdan; Arslan, Sezen – Online Submission, 2017
Within the scope of educational testing and assessment, setting standards and creating guidelines as a code of practice provides more productive and sustainable outcomes. In this sense, internationally accepted and regionally accredited principles are suggested for standardizing language testing and assessment practices. Herein, ILTA guidelines…
Descriptors: Foreign Countries, Second Language Instruction, English (Second Language), Language Tests
Martin, Michael O., Ed.; Mullis, Ina V. S., Ed.; Hooper, Martin, Ed. – International Association for the Evaluation of Educational Achievement, 2017
"Methods and Procedures in PIRLS 2016" documents the development of the Progress in International Reading Literacy Study (PIRLS) assessments and questionnaires and describes the methods used in sampling, translation verification, data collection, database construction, and the construction of the achievement and context questionnaire…
Descriptors: Foreign Countries, Achievement Tests, Grade 4, International Assessment
DeBoer, George E.; Quellmalz, Edys S.; Davenport, Jodi L.; Timms, Michael J.; Herrmann-Abell, Cari F.; Buckley, Barbara C.; Jordan, Kevin A.; Huang, Chun-Wei; Flanagan, Jean C. – Journal of Research in Science Teaching, 2014
Online testing holds much promise for assessing students' complex science knowledge and inquiry skills. In the current study, we examined the comparative effectiveness of assessment tasks and test items presented in online modules that used either a static, active, or interactive modality. A total of 1,836 students from the classrooms of 22 middle…
Descriptors: Computer Assisted Testing, Test Items, Interaction, Middle School Students
Wang, Chun – Journal of Educational and Behavioral Statistics, 2014
Many latent traits in social sciences display a hierarchical structure, such as intelligence, cognitive ability, or personality. Usually a second-order factor is linearly related to a group of first-order factors (also called domain abilities in cognitive ability measures), and the first-order factors directly govern the actual item responses.…
Descriptors: Measurement, Accuracy, Item Response Theory, Adaptive Testing
Ding, Lin – Physical Review Special Topics - Physics Education Research, 2014
Discipline-based science concept assessments are powerful tools to measure learners' disciplinary core ideas. Among many such assessments, the Brief Electricity and Magnetism Assessment (BEMA) has been broadly used to gauge student conceptions of key electricity and magnetism (E&M) topics in college-level introductory physics courses.…
Descriptors: Scientific Concepts, Science Tests, College Science, Physics
Webb, Mi-young L.; Lederberg, Amy R. – Journal of Speech, Language, and Hearing Research, 2014
Purpose: This study evaluated psychometric properties of 2 phonological awareness (PA) tests normed for hearing children when used with deaf and hard-of-hearing (DHH) children with functional hearing. It also provides an in-depth description of these children's PA. Method: One hundred and eight DHH children (mean age = 63.3 months) with cochlear…
Descriptors: Phonological Awareness, Deafness, Partial Hearing, Children
Sliter, Katherine A.; Zickar, Michael J. – Educational and Psychological Measurement, 2014
This study compared the functioning of positively and negatively worded personality items using item response theory. In Study 1, word pairs from the Goldberg Adjective Checklist were analyzed using the Graded Response Model. Across subscales, negatively worded items produced comparatively higher difficulty and lower discrimination parameters than…
Descriptors: Item Response Theory, Psychometrics, Personality Measures, Test Items
Dronjic, Vedran; Helms-Park, Rena – Applied Psycholinguistics, 2014
Qian and Schedl's Depth of Vocabulary Knowledge Test was administered to 31 native-speaker undergraduates under an "unconstrained" condition, in which the number of responses to headwords was not fixed, whereas a corresponding group (n = 36) completed the test under the original "constrained" condition. Results…
Descriptors: Language Tests, Vocabulary Development, Psycholinguistics, Test Validity
Quirk, Matthew; Rebelez, Jennica; Furlong, Michael – Journal of Psychoeducational Assessment, 2014
This study contributed to the school readiness literature by examining the factor structure and reliability of a revised version of the Kindergarten Student Entrance Profile (KSEP). Teachers rated 579 Latino/a children during the first month of kindergarten using the KSEP. Factor analysis procedures (exploratory factor analysis [EFA] and…
Descriptors: School Readiness, Screening Tests, Hispanic American Students, Kindergarten
Hou, Likun; de la Torre, Jimmy; Nandakumar, Ratna – Journal of Educational Measurement, 2014
Analyzing examinees' responses using cognitive diagnostic models (CDMs) has the advantage of providing diagnostic information. To ensure the validity of the results from these models, differential item functioning (DIF) in CDMs needs to be investigated. In this article, the Wald test is proposed to examine DIF in the context of CDMs. This study…
Descriptors: Test Bias, Models, Simulation, Error Patterns
Daniels, Vijay J.; Bordage, Georges; Gierl, Mark J.; Yudkowsky, Rachel – Advances in Health Sciences Education, 2014
Objective structured clinical examinations (OSCEs) are used worldwide for summative examinations but often lack acceptable reliability. Research has shown that reliability of scores increases if OSCE checklists for medical students include only clinically relevant items. Also, checklists are often missing evidence-based items that high-achieving…
Descriptors: Graduate Medical Education, Check Lists, Scores, Internal Medicine
Han, Kyung T.; Guo, Fanmin – Practical Assessment, Research & Evaluation, 2014
The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…
Descriptors: Maximum Likelihood Statistics, Structural Equation Models, Data, Computer Assisted Testing
Pan, Steven C.; Gopal, Arpita; Rickard, Timothy C. – Journal of Educational Psychology, 2016
Does correctly answering a test question about a multiterm fact enhance memory for the entire fact? We explored that issue in 4 experiments. Subjects first studied Advanced Placement History or Biology facts. Half of those facts were then restudied, whereas the remainder were tested using "5 W" (i.e., "who, what, when, where",…
Descriptors: Undergraduate Students, Testing, Test Items, Memory
Ulu, Mustafa; Akar, Cüneyt – Educational Research and Reviews, 2016
This study aims to determine to what extent visuals contribute to success in the non-routine problem-solving process and what types of errors are made when solving with visuals. A comparative model was utilized to identify the effect of visuals on student achievement, and the clinical interview technique was used to determine the types of errors. In…
Descriptors: Problem Solving, Elementary School Students, Grade 4, Mathematics
Attali, Yigal; Laitusis, Cara; Stone, Elizabeth – Educational and Psychological Measurement, 2016
There are many reasons to believe that open-ended (OE) and multiple-choice (MC) items elicit different cognitive demands of students. However, empirical evidence that supports this view is lacking. In this study, we investigated the reactions of test takers to an interactive assessment with immediate feedback and answer-revision opportunities for…
Descriptors: Test Items, Questioning Techniques, Differences, Student Reaction
