Showing all 10 results
Peer reviewed
Full text available on ERIC (PDF)
Ganzfried, Sam; Yusuf, Farzana – Education Sciences, 2018
A problem faced by many instructors is that of designing exams that accurately assess the abilities of the students. Typically, these exams are prepared several days in advance, and generic question scores are used based on rough approximation of the question difficulty and length. For example, for a recent class taught by the author, there were…
Descriptors: Weighted Scores, Test Construction, Student Evaluation, Multiple Choice Tests
Peer reviewed
Direct link
Bulut, Okan; Lei, Ming; Guo, Qi – International Journal of Research & Method in Education, 2018
Item positions in educational assessments are often randomized across students to prevent cheating. However, if altering item positions results in any significant impact on students' performance, it may threaten the validity of test scores. Two widely used approaches for detecting position effects -- logistic regression and hierarchical…
Descriptors: Alternative Assessment, Disabilities, Computer Assisted Testing, Structural Equation Models
Peer reviewed
Direct link
Coster, Wendy J.; Kramer, Jessica M.; Tian, Feng; Dooley, Meghan; Liljenquist, Kendra; Kao, Ying-Chia; Ni, Pengsheng – Autism: The International Journal of Research and Practice, 2016
The Pediatric Evaluation of Disability Inventory-Computer Adaptive Test is an alternative method for describing the adaptive function of children and youth with disabilities using a computer-administered assessment. This study evaluated the performance of the Pediatric Evaluation of Disability Inventory-Computer Adaptive Test with a national…
Descriptors: Autism, Pervasive Developmental Disorders, Computer Assisted Testing, Adaptive Testing
Peer reviewed
Direct link
Ihme, Jan Marten; Senkbeil, Martin; Goldhammer, Frank; Gerick, Julia – European Educational Research Journal, 2017
The combination of different item formats is found quite often in large scale assessments, and analyses on the dimensionality often indicate multi-dimensionality of tests regarding the task format. In ICILS 2013, three different item types (information-based response tasks, simulation tasks, and authoring tasks) were used to measure computer and…
Descriptors: Foreign Countries, Computer Literacy, Information Literacy, International Assessment
Peer reviewed
Direct link
Hauser, Carl; Thum, Yeow Meng; He, Wei; Ma, Lingling – Educational and Psychological Measurement, 2015
When conducting item reviews, analysts evaluate an array of statistical and graphical information to assess the fit of a field test (FT) item to an item response theory model. The process can be tedious, particularly when the number of human reviews (HR) to be completed is large. Furthermore, such a process leads to decisions that are susceptible…
Descriptors: Test Items, Item Response Theory, Research Methodology, Decision Making
Thompson, Carrie A. – ProQuest LLC, 2013
The Missionary Training Center (MTC), affiliated with the Church of Jesus Christ of Latter-day Saints, needs a reliable and cost-effective way to measure the oral language proficiency of missionaries learning Spanish. The MTC needed to measure incoming missionaries' Spanish language proficiency for training and classroom assignment as well as to…
Descriptors: Religious Cultural Groups, Second Language Learning, Second Language Instruction, Interviews
Peer reviewed
Full text available on ERIC (PDF)
Zhang, Mo; Breyer, F. Jay; Lorenz, Florian – ETS Research Report Series, 2013
In this research, we investigated the suitability of implementing "e-rater"® automated essay scoring in a high-stakes large-scale English language testing program. We examined the effectiveness of generic scoring and 2 variants of prompt-based scoring approaches. Effectiveness was evaluated on a number of dimensions, including agreement…
Descriptors: Computer Assisted Testing, Computer Software, Scoring, Language Tests
Schnipke, Deborah L.; Pashley, Peter J. – 1997
Differences in test performance on time-limited tests may be due in part to differential response-time rates between subgroups, rather than real differences in the knowledge, skills, or developed abilities of interest. With computer-administered tests, response times are available and may be used to address this issue. This study investigates…
Descriptors: Computer Assisted Testing, Data Analysis, English, High Stakes Tests
Bejar, Isaac I. – 1986
This paper considers the feasibility of incorporating research results from cognitive science into the modeling of performance on psychometric tests and the construction of test items. The paper focuses on the feasibility of modeling performance on a three-dimensional rotation task within the context of Item Response Theory (IRT). To test the…
Descriptors: Cognitive Measurement, Cognitive Tests, Computer Assisted Testing, Difficulty Level
Peer reviewed
Tait, K.; Hughes, I. E. – Computers and Education, 1984
Describes a computerized system which presents objective test items to students, and upon question completion, provides immediate feedback with correct answers and explanations. Use of the system by pharmacology students and an analysis of end-of-year examination results are presented. (Author/MBR)
Descriptors: Computer Assisted Instruction, Computer Assisted Testing, Computer Software, Data Analysis