Showing 1,516 to 1,530 of 9,552 results
Peer reviewed
Direct link
Bildiren, Ahmet; Bikmaz Bilgen, Özge; Korkmaz, Mediha – SAGE Open, 2021
The aim of the present study is to develop a national non-verbal cognitive ability test in Turkey. Test items were developed in the first stage and piloted with 3,073 children aged 4 to 13. The test was given its final form based on the values of item difficulty, item discrimination, item total score…
Descriptors: National Competency Tests, Nonverbal Tests, Cognitive Ability, Cognitive Tests
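The item statistics named in the Bildiren, Bikmaz Bilgen, and Korkmaz abstract (item difficulty, item discrimination, item-total correlation) are standard classical test theory quantities. A minimal Python sketch of how they are typically computed from a 0/1 scored response matrix, using simulated data; the variable names and data are hypothetical, and this is not the authors' analysis code:

import numpy as np

# Hypothetical 0/1 scored response matrix: rows = examinees, columns = items.
responses = np.random.default_rng(0).integers(0, 2, size=(200, 30))

# Item difficulty: proportion of examinees answering each item correctly.
difficulty = responses.mean(axis=0)

# Corrected item-total correlation (item score vs. total of the remaining items),
# a common classical index of item discrimination.
totals = responses.sum(axis=1)
discrimination = np.array([
    np.corrcoef(responses[:, j], totals - responses[:, j])[0, 1]
    for j in range(responses.shape[1])
])

for j, (p, r) in enumerate(zip(difficulty, discrimination)):
    print(f"item {j + 1:2d}: difficulty = {p:.2f}, item-total r = {r:.2f}")

Items with extreme difficulty values or near-zero item-total correlations are the usual candidates for revision or removal before a test is given its final form.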
Peer reviewed
PDF on ERIC Download full text
Alnasraween, Moen Salman; Almughrabi, Ayat Mohammad; Ammari, Raeda Mofid; Alkaramneh, Mohammad Saleh – Cypriot Journal of Educational Sciences, 2021
The purpose of this study is to construct a digital culture test based on item response theory and to investigate its psychometric properties. The study sample consisted of 650 male and female eighth-grade students from the Directorate of Education and Teaching of Salt District. To obtain the results, the…
Descriptors: Foreign Countries, Technological Literacy, Tests, Psychometrics
Peer reviewed
Direct link
Tsaousis, Ioannis; Sideridis, Georgios D.; AlGhamdi, Hannan M. – Journal of Psychoeducational Assessment, 2021
This study evaluated the psychometric quality of a computerized adaptive testing (CAT) version of the general cognitive ability test (GCAT), using a simulation study protocol put forth by Han (2018a). For the analysis, three different sets of items were generated, providing an item pool of 165 items. Before evaluating the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Cognitive Tests, Cognitive Ability
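A CAT such as the GCAT version evaluated above typically selects, at each step, the unanswered item that is most informative at the examinee's current ability estimate. A minimal Python sketch of that selection loop under a two-parameter logistic model, with made-up item parameters; Han's (2018a) simulation protocol and the study's actual 165-item pool are not reproduced here:

import numpy as np

rng = np.random.default_rng(1)
a = rng.uniform(0.8, 2.0, size=165)   # hypothetical discrimination parameters
b = rng.normal(0.0, 1.0, size=165)    # hypothetical difficulty parameters

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of each item at ability theta (2PL)."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

def select_next_item(theta, administered):
    """Pick the not-yet-administered item with maximum information at theta."""
    info = item_information(theta, a, b)
    info[list(administered)] = -np.inf
    return int(np.argmax(info))

theta_hat, administered = 0.0, set()
for _ in range(20):                    # fixed-length 20-item CAT, for illustration
    j = select_next_item(theta_hat, administered)
    administered.add(j)
    # ...administer item j, score the response, and update theta_hat (e.g., by MLE or EAP)

A real CAT would also re-estimate ability after every response and apply exposure-control and stopping rules, which this sketch omits.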
Peer reviewed
Direct link
Ponce, Héctor R.; Mayer, Richard E.; Loyola, María Soledad – Journal of Educational Computing Research, 2021
One of the most common technology-enhanced items used in large-scale K-12 testing programs is the drag-and-drop response interaction. The main research questions in this study are: (a) Does adding a drag-and-drop interface to an online test affect the accuracy of student performance? (b) Does adding a drag-and-drop interface to an online test…
Descriptors: Computer Assisted Testing, Test Construction, Standardized Tests, Elementary School Students
Peer reviewed
Direct link
Moulton, Sara E.; Young, Ellie L. – Psychology in the Schools, 2021
This study examined gender differences in the psychometric properties of the Student Risk Screening Scale for Internalizing and Externalizing behaviors (SRSS-IE) using item response theory methods among a sample of 2,122 middle school students. The SRSS-IE is a screening instrument used to identify students who are potentially at risk for…
Descriptors: Screening Tests, Student Behavior, Behavior Problems, Emotional Disturbances
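The SRSS-IE items are ordered behavior ratings rather than right/wrong responses, so IRT analyses of such instruments commonly use a polytomous model such as Samejima's graded response model; the excerpt above does not state which model Moulton and Young fit, so the following is illustrative only. For item j with discrimination a_j and category thresholds b_{jk}, the graded response model gives

P^{*}_{jk}(\theta) = \Pr(X_j \ge k \mid \theta) = \frac{1}{1 + e^{-a_j(\theta - b_{jk})}}, \qquad \Pr(X_j = k \mid \theta) = P^{*}_{jk}(\theta) - P^{*}_{j,k+1}(\theta).

Gender differences in these item parameters are what differential item functioning analyses are designed to detect.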
Peer reviewed
PDF on ERIC Download full text
Cascella, Clelia; Giberti, Chiara; Bolondi, Giorgio – Education Sciences, 2021
This study explores how different formulations of the same mathematical item may influence students' answers, and whether boys and girls are equally affected by differences in presentation. An experimental design was employed: the same stem-items (i.e., items with the same mathematical content and question intent) were…
Descriptors: Mathematics Achievement, Mathematics Tests, Achievement Tests, Scores
Peer reviewed
PDF on ERIC Download full text
Malone, Kathy L.; Boone, William J.; Stammen, Andria; Schuchardt, Anita; Ding, Lin; Sabree, Zakee – EURASIA Journal of Mathematics, Science and Technology Education, 2021
Instruments for assessing secondary students' conceptual understanding of core concepts in biology are needed by educational practitioners and researchers alike. Most instruments available for secondary biology (years 9 to 12) focus only on highly specific biological concepts instead of multiple core concepts. This study describes the development…
Descriptors: Measures (Individuals), Test Construction, Construct Validity, Test Reliability
Peer reviewed
Direct link
Lu, Owen H. T.; Huang, Anna Y. Q.; Tsai, Danny C. L.; Yang, Stephen J. H. – Educational Technology & Society, 2021
Human-guided machine learning can improve computing intelligence, and it can accurately assist humans in various tasks. In education research, artificial intelligence (AI) is applicable in many situations, such as predicting students' learning paths and strategies. In this study, we explore the benefits of repetitive practice of short-answer…
Descriptors: Test Items, Artificial Intelligence, Test Construction, Student Evaluation
Peer reviewed
PDF on ERIC Download full text
Güler, Mustafa – Journal of Pedagogical Research, 2021
The extent to which targeted educational outcomes are achieved can be determined through educational assessment. Although various alternative forms of assessment have arisen in recent decades, written examinations are still widely used by teachers. This study aims to determine the quality of the questions used by middle school…
Descriptors: Middle School Teachers, Mathematics Teachers, Middle School Mathematics, Mathematics Tests
Peer reviewed
Direct link
Lukácsi, Zoltán – Language Testing, 2021
In second language writing assessment, rating scales and scores from human-mediated assessment have been criticized for a number of shortcomings including problems with adequacy, relevance, and reliability (Hamp-Lyons, 1990; McNamara, 1996; Weigle, 2002). In its testing practice, Euroexam International also detected that the rating scales for…
Descriptors: Test Construction, Test Validity, Test Items, Check Lists
Peer reviewed
PDF on ERIC Download full text
Rintayati, Peduk; Lukitasari, Hafizhah; Syawaludin, Ahmad – International Journal of Instruction, 2021
Assessment of higher-order thinking skills (HOTS) provides opportunities for students to develop more in-depth knowledge and serves their ability to identify and solve problems. One type of instrument for measuring HOTS objectively is the two-tier multiple-choice test (TTMCT). This research is part of the research and development…
Descriptors: Foreign Countries, Elementary School Students, Thinking Skills, Multiple Choice Tests
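A two-tier multiple-choice item pairs a content question (first tier) with a question about the reasoning behind the chosen answer (second tier), and a response is usually scored as correct only when both tiers are answered correctly. A minimal Python scoring sketch with hypothetical keys, not taken from the instrument described above:

# Hypothetical keys: (first-tier answer, second-tier reason) for each item.
keys = {1: ("B", "ii"), 2: ("D", "i"), 3: ("A", "iii")}

def score_two_tier(item, answer, reason):
    """Return 1 only if both the answer tier and the reasoning tier match the key."""
    key_answer, key_reason = keys[item]
    return int(answer == key_answer and reason == key_reason)

# Example: a correct answer with faulty reasoning scores 0.
print(score_two_tier(1, "B", "i"))   # 0
print(score_two_tier(1, "B", "ii"))  # 1

Scoring both tiers jointly is what lets the format separate answers backed by sound reasoning from lucky guesses.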
Peer reviewed
Direct link
Eze, Emmanuel – Journal of Geography, 2021
The persistent decline in students' performance in external geography examinations prompted this enquiry into the causes of failure reported by examiners. This study therefore adopted a qualitative approach to analyze West African Examination Council (WAEC) chief examiners' reports from 2008 to 2018 to extract vital information on why…
Descriptors: Secondary School Students, Geography Instruction, Low Achievement, Tests
Peer reviewed
PDF on ERIC Download full text
Hardcastle, Joseph M.; Herrmann Abell, Cari F.; DeBoer, George E. – Grantee Submission, 2021
We developed assessment tasks aligned to the Next Generation Science Standards (NGSS) that require students to use argumentation and explanation practices along with disciplinary core ideas and crosscutting concepts to make sense of energy-related phenomena. Scoring rubrics were created to evaluate students' ability to make accurate claims, cite…
Descriptors: Academic Standards, Energy, Scientific Concepts, Persuasive Discourse
Peer reviewed
PDF on ERIC Download full text
Zainudin, M.; Subali, Bambang; Jailani – International Journal of Instruction, 2019
A mathematical creativity instrument is a tool for assessing students' creative thinking skills in solving mathematical problems. Mathematical creativity plays a pivotal role in improving quality of life, solving problems, making changes, and increasing the efficiency and effectiveness of a system. Unfortunately, there is a gap in the assessment of…
Descriptors: Construct Validity, Creativity Tests, Mathematics, Factor Analysis
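Construct validity evidence for an instrument like this is typically gathered by factor-analyzing item or task scores. A minimal Python sketch using scikit-learn's FactorAnalysis on simulated data; the number of factors, the data, and the software choice are assumptions, since the excerpt is cut off before the method details:

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
scores = rng.normal(size=(300, 10))  # hypothetical: 300 students, 10 creativity tasks

fa = FactorAnalysis(n_components=2)  # assume two latent factors (e.g., fluency and flexibility)
fa.fit(scores)
loadings = fa.components_.T          # tasks-by-factors loading matrix
print(np.round(loadings, 2))

Inspecting which tasks load on which factor is what links the scores back to the intended construct.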
Peer reviewed
Direct link
Kalinowski, Steven T. – Educational and Psychological Measurement, 2019
Item response theory (IRT) is a statistical paradigm for developing educational tests and assessing students. IRT, however, currently lacks an established graphical method for examining model fit for the three-parameter logistic model, the most flexible and popular IRT model in educational testing. A method is presented here to do this. The graph,…
Descriptors: Item Response Theory, Educational Assessment, Goodness of Fit, Probability
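For reference, the three-parameter logistic model named in the abstract above gives the probability of a correct response to item j for an examinee with ability \theta as

P_j(\theta) = c_j + \frac{1 - c_j}{1 + e^{-a_j(\theta - b_j)}},

where a_j is the item's discrimination, b_j its difficulty, and c_j its lower asymptote (pseudo-guessing) parameter. Graphical fit checks typically compare this curve with the empirical proportions correct among examinees grouped by estimated ability.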