Showing 1,111 to 1,125 of 9,530 results
Peer reviewed
Villarreal, Victor – Journal of Psychoeducational Assessment, 2019
The "Rating Scale of Impairment" (RSI; Goldstein & Naglieri, 2016b) is a norm-referenced measure of functional impairment. The RSI measures impairment in six domains, as well as overall impairment, based in part on the International Classification of Functioning, Disability, and Health. Functional impairment, as defined by the ICF…
Descriptors: Rating Scales, Norm Referenced Tests, Disabilities, Test Construction
Peer reviewed
Han, Kyung T.; Dimitrov, Dimiter M.; Al-Mashary, Faisal – Educational and Psychological Measurement, 2019
The "D"-scoring method for scoring and equating tests with binary items proposed by Dimitrov offers some of the advantages of item response theory, such as item-level difficulty information and score computation that reflects the item difficulties, while retaining the merits of classical test theory such as the simplicity of number…
Descriptors: Test Construction, Scoring, Test Items, Adaptive Testing
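To make "score computation that reflects the item difficulties" concrete, here is a minimal sketch of a difficulty-weighted score. The weight d_j = 1 - p_j and the normalized sum are assumptions made for illustration; consult Dimitrov's papers for the exact definition of the "D"-score.

```python
import numpy as np

def d_score(responses, p_correct):
    """Difficulty-weighted score: harder items (lower p_correct) count more.

    responses: 0/1 responses of one examinee.
    p_correct: classical proportion correct for each item in a reference group.
    """
    d = 1.0 - np.asarray(p_correct, dtype=float)  # assumed difficulty weight
    x = np.asarray(responses, dtype=float)
    return float((d * x).sum() / d.sum())

# Answering only the two hardest of four items scores above the raw 0.5
print(d_score([0, 0, 1, 1], [0.9, 0.8, 0.4, 0.3]))  # 0.8125
```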
Peer reviewed
Kim, Seonghoon; Kolen, Michael J. – Applied Measurement in Education, 2019
In applications of item response theory (IRT), fixed parameter calibration (FPC) has been used to estimate the item parameters of a new test form on the existing ability scale of an item pool. The present paper presents an application of FPC to multiple examinee groups test data that are linked to the item pool via anchor items, and investigates…
Descriptors: Item Response Theory, Item Banks, Test Items, Computation
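The core idea of FPC is that the anchor items' parameters are held at their item-pool values, which pins down the ability scale on which the new items are then estimated. The following is a deliberately simplified two-stage sketch (ML ability estimates from the anchors, then a per-item 2PL fit), not the marginal maximum likelihood procedure used operationally; all function names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def p2pl(theta, a, b):
    """2PL response probability."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def theta_mle(x_anchor, a_anchor, b_anchor):
    """ML ability estimate using only anchor items, whose (a, b) stay fixed."""
    def nll(t):
        p = np.clip(p2pl(t[0], a_anchor, b_anchor), 1e-9, 1 - 1e-9)
        return -np.sum(x_anchor * np.log(p) + (1 - x_anchor) * np.log(1 - p))
    return minimize(nll, x0=[0.0], method="Nelder-Mead").x[0]

def calibrate_new_item(x_new, thetas):
    """Estimate (a, b) of a new item on the scale fixed by the anchors."""
    def nll(params):
        a, b = params
        p = np.clip(p2pl(thetas, a, b), 1e-9, 1 - 1e-9)
        return -np.sum(x_new * np.log(p) + (1 - x_new) * np.log(1 - p))
    return minimize(nll, x0=[1.0, 0.0], method="Nelder-Mead").x

# Usage sketch: thetas = [theta_mle(x, a_anc, b_anc) for x in anchor_responses]
#               a_new, b_new = calibrate_new_item(new_item_responses, np.array(thetas))
```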
Peer reviewed
Rigney, Alexander M. – Journal of Psychoeducational Assessment, 2019
The "Detroit Tests of Learning Aptitude" has been in use for more than three quarters of a century (Baker & Leland, 1935). Its longevity in the field speaks to its popularity as a broad measure of cognitive abilities. Its most recent iteration, in the form of the "Detroit Tests of Learning Abilities--Fifth Edition" (DTLA-5;…
Descriptors: Aptitude Tests, Cognitive Ability, Test Construction, Test Items
Peer reviewed
Tulek, Onder Kamil; Kose, Ibrahim Alper – Eurasian Journal of Educational Research, 2019
Purpose: This research investigates tests that include DIF items and tests that have been purified of DIF items. The ability estimates obtained before and after purification are compared to determine whether the two sets of estimates are correlated. Method: The researchers used R 3.4.1 to compare the items, and…
Descriptors: Test Items, Item Analysis, Item Response Theory, Test Length
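The comparison this abstract describes can be pictured as: score everyone on the full test and again with the DIF-flagged items removed, then correlate the two sets of estimates. The sketch below substitutes proportion-correct scores for the IRT ability estimates the study computed in R; the data and flagged items are invented for illustration.

```python
import numpy as np

# Hypothetical response matrix: 200 examinees x 20 binary items
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(200, 20))
dif_items = [3, 7, 12]                                # indices flagged as DIF (hypothetical)
keep = np.setdiff1d(np.arange(20), dif_items)

full_scores = responses.mean(axis=1)                  # estimates from the full test
purified_scores = responses[:, keep].mean(axis=1)     # estimates after DIF purification

# Correlation between the two sets of estimates, as in the study's comparison
r = np.corrcoef(full_scores, purified_scores)[0, 1]
print(f"r = {r:.3f}")
```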
Peer reviewed
Wang, Lin – ETS Research Report Series, 2019
Rearranging the response options across different versions of a multiple-choice test can be an effective strategy against cheating. This study investigated whether rearranging response options would affect item performance and test score comparability. A study test was assembled as the base version from which 3 variant versions were…
Descriptors: Multiple Choice Tests, Test Items, Test Format, Scores
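A minimal sketch of the variant-assembly step this design implies: each variant permutes an item's options and re-indexes the key so that all versions are scored identically. The item content is invented, and the three-variant count mirrors the abstract only loosely.

```python
import random

def make_variant(item, seed):
    """Return a copy of an MCQ with its options rearranged and the key re-indexed."""
    rng = random.Random(seed)
    options = item["options"][:]
    key_text = options[item["key"]]       # remember the correct option's text
    rng.shuffle(options)
    return {"stem": item["stem"], "options": options, "key": options.index(key_text)}

item = {"stem": "2 + 2 = ?", "options": ["3", "4", "5", "22"], "key": 1}
for seed in (1, 2, 3):                    # three variant versions, as in the study design
    print(make_variant(item, seed))
```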
Peer reviewed
Ip, Edward H.; Strachan, Tyler; Fu, Yanyan; Lay, Alexandra; Willse, John T.; Chen, Shyh-Huei; Rutkowski, Leslie; Ackerman, Terry – Journal of Educational Measurement, 2019
Test items must often be broad in scope to be ecologically valid. It is therefore almost inevitable that secondary dimensions are introduced into a test during test development. A cognitive test may require one or more abilities besides the primary ability to correctly respond to an item, in which case a unidimensional test score overestimates the…
Descriptors: Test Items, Test Bias, Test Construction, Scores
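One standard way to write the situation this abstract describes is a compensatory two-dimensional 2PL, in which an item loads on the primary ability and, unintentionally, on a secondary one:

\[
P(X_{ij}=1 \mid \theta_{i1}, \theta_{i2})
  = \frac{1}{1 + \exp\{-(a_{j1}\theta_{i1} + a_{j2}\theta_{i2} + d_j)\}} .
\]

Whenever \(a_{j2} \neq 0\), a unidimensional score computed from such items mixes variance from the secondary ability \(\theta_2\) into the estimate of the primary ability \(\theta_1\).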
Peer reviewed
Leo, J.; Kurdi, G.; Matentzoglu, N.; Parsia, B.; Sattler, U.; Forge, S.; Donato, G.; Dowling, W. – International Journal of Artificial Intelligence in Education, 2019
Designing good multiple choice questions (MCQs) for education and assessment is time consuming and error-prone. An abundance of structured and semi-structured data has led to the development of automatic MCQ generation methods. Recently, ontologies have emerged as powerful tools to enable the automatic generation of MCQs. However, current question…
Descriptors: Multiple Choice Tests, Test Items, Automation, Test Construction
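To make the ontology-based generation idea concrete, here is a toy sketch that mines a class hierarchy for distractors: the key is a class of the asked-about kind, and distractors come from other branches, so they are wrong but structurally plausible. The tiny ontology and function names are invented; systems like the one studied here operate on OWL ontologies with far richer generation strategies.

```python
# Hypothetical toy ontology: each class maps to its parent class
ontology = {
    "dog": "mammal", "cat": "mammal", "whale": "mammal",
    "eagle": "bird", "penguin": "bird",
}

def distractors(term, n=3):
    """Classes from other branches of the hierarchy: wrong but plausible options."""
    parent = ontology[term]
    return [t for t, p in ontology.items() if p != parent][:n]

def make_mcq(term):
    return {
        "stem": f"Which of the following is a {ontology[term]}?",
        "key": term,
        "distractors": distractors(term),
    }

print(make_mcq("dog"))
# {'stem': 'Which of the following is a mammal?', 'key': 'dog',
#  'distractors': ['eagle', 'penguin']}
```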
Peer reviewed
Lee, Yi-Hsuan; Haberman, Shelby J.; Dorans, Neil J. – Journal of Educational Measurement, 2019
In many educational tests, both multiple-choice (MC) and constructed-response (CR) sections are used to measure different constructs. In many common cases, security concerns lead to the use of form-specific CR items that cannot be used for equating test scores, along with MC sections that can be linked to previous test forms via common items. In…
Descriptors: Scores, Multiple Choice Tests, Test Items, Responses
Peer reviewed
Vicente, Eva; Guillén, Verónica Marina; Gómez, Laura Elisabet; Ibáñez, Alba; Sánchez, Sergio – Journal of Applied Research in Intellectual Disabilities, 2019
Background: Advances in international studies on self-determination point out the need for continuous efforts to deepen its understanding and implications. The aim of this study is to obtain a comprehensive pool of items to operationalize the self-determination construct that serves as a starting point towards a valid instrument based on the…
Descriptors: Stakeholders, Self Determination, Intellectual Disability, Professional Personnel
OECD Publishing, 2019
Log files from computer-based assessments can help researchers better understand respondents' behaviours and cognitive strategies. Analysis of timing information from the Programme for the International Assessment of Adult Competencies (PIAAC) reveals large differences in the time participants take to answer assessment items, as well as large country differences…
Descriptors: Adults, Computer Assisted Testing, Test Items, Reaction Time
Peer reviewed
Wang, Chun; Zhang, Xue – Grantee Submission, 2019
The relations among alternative parameterizations of the binary factor analysis (FA) model and two-parameter logistic (2PL) item response theory (IRT) model have been thoroughly discussed in literature (e.g., Lord & Novick, 1968; Takane & de Leeuw, 1987; McDonald, 1999; Wirth & Edwards, 2007; Kamata & Bauer, 2008). However, the…
Descriptors: Test Items, Error of Measurement, Item Response Theory, Factor Analysis
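For context, the well-known conversion this abstract alludes to maps a standardized binary-FA loading \(\lambda_j\) and threshold \(\tau_j\) to normal-ogive IRT discrimination and difficulty parameters (the logistic 2PL differs only by the usual \(D \approx 1.7\) scaling):

\[
a_j = \frac{\lambda_j}{\sqrt{1 - \lambda_j^{2}}}, \qquad
b_j = \frac{\tau_j}{\lambda_j}.
\]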
Peer reviewed
Jewsbury, Paul A.; van Rijn, Peter W. – Journal of Educational and Behavioral Statistics, 2020
In large-scale educational assessment data consistent with a simple-structure multidimensional item response theory (MIRT) model, where every item measures only one latent variable, separate unidimensional item response theory (UIRT) models for each latent variable are often calibrated for practical reasons. While this approach can be valid for…
Descriptors: Item Response Theory, Computation, Test Items, Adaptive Testing
Peer reviewed
Mandracchia, Nina R.; Sims, Wesley A. – Computers in the Schools, 2020
As technology use continues to rapidly increase, so too does consumer use of web-based resources. While important, accessibility is often overemphasized by users when consuming and evaluating web resources. This prioritization may have particularly negative consequences for the selection of supports or interventions in educational settings. This…
Descriptors: Internet, Resources, Selection, Rating Scales
Sinharay, Sandip; van Rijn, Peter W. – Journal of Educational and Behavioral Statistics, 2020
Response time models (RTMs) are of increasing interest in educational and psychological testing. This article focuses on the lognormal model for response times, which is one of the most popular RTMs. Several existing statistics for testing normality and the fit of factor analysis models are repurposed for testing the fit of the lognormal model. A…
Descriptors: Educational Testing, Psychological Testing, Goodness of Fit, Factor Analysis
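The lognormal RTM referenced here is usually written (following van der Linden, 2006) as a normal model for log response times, which is what makes repurposed normality statistics natural fit diagnostics:

\[
\log T_{ij} \sim N\!\bigl(\beta_j - \tau_i,\; \alpha_j^{-2}\bigr),
\]

where \(\tau_i\) is the speed of person \(i\), \(\beta_j\) the time intensity of item \(j\), and \(\alpha_j\) a precision (discrimination-like) parameter.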