Publication Date
In 2025: 0
Since 2024: 2
Since 2021 (last 5 years): 7
Since 2016 (last 10 years): 13
Since 2006 (last 20 years): 18
Descriptor
Computer Assisted Testing: 22
Prediction: 22
Test Items: 22
Test Construction: 8
Difficulty Level: 7
Item Analysis: 7
Models: 6
Scores: 6
Adaptive Testing: 5
Correlation: 5
Simulation: 5
Author
Abayeva, Nella F.: 1
Albano, Anthony D.: 1
Anita B. Delahay: 1
Attali, Yigal: 1
Bastianello, Tamara: 1
Brondino, Margherita: 1
Cai, Liuhan: 1
Clevinger, Amanda: 1
Crossley, Scott: 1
Davey, Tim: 1
Feng, Mingyu, Ed.: 1
Publication Type
Journal Articles: 14
Reports - Research: 14
Reports - Evaluative: 5
Speeches/Meeting Papers: 3
Dissertations/Theses -…: 2
Collected Works - Proceedings: 1
Location
Ireland: 1
Italy: 1
Japan: 1
Netherlands: 1
Thailand: 1
United Kingdom (England): 1
United Kingdom (Northern…: 1
United States: 1
Assessments and Surveys
Armed Services Vocational…: 1
Flesch Kincaid Grade Level…: 1
Graduate Record Examinations: 1
National Assessment of…: 1
Peabody Picture Vocabulary…: 1
Program for the International…: 1
Test of English as a Foreign…: 1
Qiao, Chen; Hu, Xiao – IEEE Transactions on Learning Technologies, 2023
Free text answers to short questions can reflect students' mastery of concepts and their relationships relevant to learning objectives. However, automating the assessment of free text answers has been challenging due to the complexity of natural language. Existing studies often predict the scores of free text answers in a "black box"…
Descriptors: Computer Assisted Testing, Automation, Test Items, Semantics
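For context on how interpretable baselines differ from the "black box" models the abstract mentions, one transparent approach is to score a free-text answer by its lexical similarity to a reference answer. The sketch below is not the authors' model; it assumes scikit-learn is available and uses made-up answers purely for illustration.

```python
# Minimal sketch (not the authors' model): score short free-text answers by
# cosine similarity between each student response and a reference answer.
# The reference and responses below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference = "Photosynthesis converts light energy into chemical energy stored in glucose."
responses = [
    "Plants turn light energy into chemical energy in the form of glucose.",
    "It is when plants breathe oxygen at night.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([reference] + responses)       # row 0 = reference
similarities = cosine_similarity(matrix[0], matrix[1:]).ravel()  # one score per response

for text, sim in zip(responses, similarities):
    print(f"{sim:.2f}  {text}")
```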
Stenger, Rachel; Olson, Kristen; Smyth, Jolene D. – Field Methods, 2023
Questionnaire designers use readability measures to ensure that questions can be understood by the target population. The most common measure is the Flesch-Kincaid Grade Level, but other formulas exist. This article compares six readability measures across 150 questions in a self-administered questionnaire, finding notable variation in…
Descriptors: Readability, Readability Formulas, Computer Assisted Testing, Evaluation Methods
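The Flesch-Kincaid Grade Level referenced above combines average sentence length with average syllables per word: 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. The rough sketch below implements that formula; the vowel-group syllable counter is a crude heuristic, so its output will drift slightly from published readability tools.

```python
# Rough sketch of the Flesch-Kincaid Grade Level formula:
#   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
# The syllable counter is an approximation (counts vowel groups).
import re

def count_syllables(word: str) -> int:
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

print(round(flesch_kincaid_grade("How often do you visit a doctor? Please answer in days."), 1))
```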
Qunbar, Sa'ed Ali – ProQuest LLC, 2019
This work presents a study that used distributed language representations of test items to model item difficulty. Distributed language representations are low-dimensional numeric representations of written language inspired by, and generated with, artificial neural network architectures. The research begins with a discussion of the importance of item…
Descriptors: Computer Assisted Testing, Test Items, Difficulty Level, Models
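As a hedged illustration of the general idea rather than the dissertation's actual pipeline, one can average pre-trained word vectors over an item's text and regress known difficulty estimates on the result. The `word_vectors` dictionary and the dimensionality below are assumptions, not artifacts of the study.

```python
# Illustrative sketch only (not the study's pipeline): represent each item stem
# as the average of pre-trained word vectors, then regress known difficulty
# estimates (e.g., IRT b-parameters) on those features. `word_vectors` is a
# hypothetical dict mapping tokens to NumPy arrays (e.g., loaded from GloVe).
import numpy as np
from sklearn.linear_model import Ridge

def embed(text: str, word_vectors: dict, dim: int) -> np.ndarray:
    vecs = [word_vectors[t] for t in text.lower().split() if t in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def fit_difficulty_model(stems, difficulties, word_vectors, dim=50):
    X = np.vstack([embed(s, word_vectors, dim) for s in stems])
    return Ridge(alpha=1.0).fit(X, difficulties)
```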
Patel, Nirmal; Sharma, Aditya; Shah, Tirth; Lomas, Derek – Journal of Educational Data Mining, 2021
Process Analysis is an emerging approach to discover meaningful knowledge from temporal educational data. The study presented in this paper shows how we used Process Analysis methods on the National Assessment of Educational Progress (NAEP) test data for modeling and predicting student test-taking behavior. Our process-oriented data exploration…
Descriptors: Learning Analytics, National Competency Tests, Evaluation Methods, Prediction
Mingying Zheng – ProQuest LLC, 2024
The digital transformation in educational assessment has led to the proliferation of large-scale data, offering unprecedented opportunities to enhance language learning and testing through machine learning (ML) techniques. Drawing on the extensive data generated by online English language assessments, this dissertation investigates the efficacy…
Descriptors: Artificial Intelligence, Computational Linguistics, Language Tests, English (Second Language)
Susu Zhang; Xueying Tang; Qiwei He; Jingchen Liu; Zhiliang Ying – Grantee Submission, 2024
Computerized assessments and interactive simulation tasks are increasingly popular and afford the collection of process data, i.e., an examinee's sequence of actions (e.g., clickstreams, keystrokes) that arises from interactions with each task. Action sequence data contain rich information on the problem-solving process but are in a nonstandard,…
Descriptors: Correlation, Problem Solving, Computer Assisted Testing, Prediction
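One simple way to put such nonstandard action sequences into a fixed-length form, offered here only as a baseline and not as the authors' method, is to count action n-grams, which standard predictive models can then consume. The action names below are hypothetical.

```python
# Baseline sketch (not the authors' method): convert each examinee's action
# sequence into unigram/bigram counts, yielding a fixed-length feature matrix.
from sklearn.feature_extraction.text import CountVectorizer

action_sequences = [
    ["open_tool", "drag_slider", "submit"],
    ["open_tool", "open_help", "drag_slider", "drag_slider", "submit"],
]

docs = [" ".join(seq) for seq in action_sequences]
vectorizer = CountVectorizer(ngram_range=(1, 2), token_pattern=r"\S+")
X = vectorizer.fit_transform(docs)            # examinees x action n-grams
print(vectorizer.get_feature_names_out())
print(X.toarray())
```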
Albano, Anthony D.; Cai, Liuhan; Lease, Erin M.; McConnell, Scott R. – Journal of Educational Measurement, 2019
Studies have shown that item difficulty can vary significantly based on the context of an item within a test form. In particular, item position may be associated with practice and fatigue effects that influence item parameter estimation. The purpose of this research was to examine the relevance of item position specifically for assessments used in…
Descriptors: Test Items, Computer Assisted Testing, Item Analysis, Difficulty Level
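A common way to formalize the position effect described above is a Rasch-type model with a linear position term, P(correct) = sigmoid(theta − b + gamma × position). The simulation below uses made-up parameter values to show how a negative gamma (fatigue) depresses observed proportion correct when the same item appears later in a form; it is an illustration, not the paper's analysis.

```python
# Sketch of a Rasch-type model with an item-position term (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def p_correct(theta, b, position, gamma=-0.02):
    return 1.0 / (1.0 + np.exp(-(theta - b + gamma * position)))

theta = rng.normal(0, 1, size=5000)   # simulated examinee abilities
b = 0.0                               # item difficulty
for pos in (1, 20, 40):
    correct = rng.random(theta.size) < p_correct(theta, b, pos)
    print(f"position {pos:2d}: proportion correct = {correct.mean():.3f}")
```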
Bastianello, Tamara; Brondino, Margherita; Persici, Valentina; Majorano, Marinella – Journal of Research in Childhood Education, 2023
The present contribution presents an assessment tool (the TALK-assessment) built to evaluate the language development and school readiness of Italian preschoolers before they enter primary school, together with its predictive validity for the children's reading and writing skills at the end of the first year of primary school. The early…
Descriptors: Literacy, Computer Assisted Testing, Italian, Language Acquisition
Anita B. Delahay; Marsha C. Lovett – Grantee Submission, 2018
Empirically supported multimedia learning (MML) principles [1] suggest effective ways to design instruction, generally for elements on the order of a graphic or an activity. We examined whether the positive impact of MML could be detected in larger instructional units from a MOOC. We coded instructional design (ID) features corresponding to MML…
Descriptors: Multimedia Instruction, Instructional Design, MOOCs, Computer Assisted Testing
Ganzfried, Sam; Yusuf, Farzana – Education Sciences, 2018
A problem faced by many instructors is that of designing exams that accurately assess the abilities of the students. Typically, these exams are prepared several days in advance, and generic question scores are used based on a rough approximation of question difficulty and length. For example, for a recent class taught by the author, there were…
Descriptors: Weighted Scores, Test Construction, Student Evaluation, Multiple Choice Tests
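As a purely hypothetical illustration of weighting questions by estimated difficulty rather than by rough guesswork (this is not the paper's optimization procedure), one could scale weights by pilot proportion-correct values and normalize them to a 100-point exam.

```python
# Hypothetical illustration: weight each question in proportion to an estimated
# difficulty (1 - pilot proportion correct), rescaled to sum to 100 points.
p_values = {"Q1": 0.90, "Q2": 0.65, "Q3": 0.40}   # hypothetical pilot data

raw = {q: 1.0 - p for q, p in p_values.items()}
total = sum(raw.values())
weights = {q: round(100 * w / total, 1) for q, w in raw.items()}
print(weights)   # harder questions receive more points
```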
Golovachyova, Viktoriya N.; Menlibekova, Gulbakhyt Zh.; Abayeva, Nella F.; Ten, Tatyana L.; Kogaya, Galina D. – International Journal of Environmental and Science Education, 2016
Computer-based monitoring systems that rely on tests may be the most effective way to evaluate knowledge. The problem of objective knowledge assessment by means of testing takes on a new dimension in the context of new paradigms in education. The analysis of the existing test methods enabled us to conclude that tests with selected…
Descriptors: Expertise, Computer Assisted Testing, Student Evaluation, Knowledge Level
Shermis, Mark D.; Mao, Liyang; Mulholland, Matthew; Kieftenbeld, Vincent – International Journal of Testing, 2017
This study uses the feature sets employed by two automated scoring engines to determine if a "linguistic profile" could be formulated that would help identify items that are likely to exhibit differential item functioning (DIF) based on linguistic features. Sixteen items were administered to 1200 students where demographic information…
Descriptors: Computer Assisted Testing, Scoring, Hypothesis Testing, Essays
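For background, a standard way to flag DIF for a single item (not necessarily the procedure used in this study) is logistic regression on the matching score, group membership, and their interaction, in the style of Swaminathan and Rogers. The sketch below uses simulated data and assumes pandas and statsmodels are available.

```python
# Logistic-regression DIF check on simulated data (background illustration only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)                      # 0 = reference, 1 = focal
theta = rng.normal(0, 1, n)
total = np.clip((theta * 5 + 20).round(), 0, 40)   # crude matching score
p = 1 / (1 + np.exp(-(theta - 0.5 * group)))       # simulate uniform DIF
response = (rng.random(n) < p).astype(int)

df = pd.DataFrame({"response": response, "total": total, "group": group})
model = smf.logit("response ~ total + group + total:group", data=df).fit(disp=False)
print(model.params)   # a non-zero `group` coefficient signals uniform DIF
```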
Hauser, Carl; Thum, Yeow Meng; He, Wei; Ma, Lingling – Educational and Psychological Measurement, 2015
When conducting item reviews, analysts evaluate an array of statistical and graphical information to assess the fit of a field test (FT) item to an item response theory model. The process can be tedious, particularly when the number of human reviews (HR) to be completed is large. Furthermore, such a process leads to decisions that are susceptible…
Descriptors: Test Items, Item Response Theory, Research Methodology, Decision Making
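For readers unfamiliar with the fit evidence such reviews weigh, two common residual-based statistics for a Rasch item are the infit and outfit mean squares, with values near 1.0 indicating adequate fit. The sketch below uses simulated responses and is illustrative only; it is not the reviewed procedure.

```python
# Residual-based Rasch item-fit statistics on simulated data (illustrative).
import numpy as np

def rasch_fit(theta, b, x):
    """theta: ability estimates, b: item difficulty, x: 0/1 responses."""
    p = 1 / (1 + np.exp(-(theta - b)))      # model-expected probability
    w = p * (1 - p)                         # response variance under the model
    z2 = (x - p) ** 2 / w                   # squared standardized residuals
    outfit = z2.mean()                      # unweighted mean square
    infit = ((x - p) ** 2).sum() / w.sum()  # information-weighted mean square
    return infit, outfit

rng = np.random.default_rng(2)
theta = rng.normal(0, 1, 1000)
x = (rng.random(1000) < 1 / (1 + np.exp(-(theta - 0.3)))).astype(int)
print(rasch_fit(theta, b=0.3, x=x))         # both should be close to 1
```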
Attali, Yigal – ETS Research Report Series, 2014
Previous research on calculator use in standardized assessments of quantitative ability focused on the effect of calculator availability on item difficulty and on whether test developers can predict these effects. With the introduction of an on-screen calculator on the Quantitative Reasoning measure of the GRE® revised General Test, it…
Descriptors: College Entrance Examinations, Graduate Study, Calculators, Test Items
Khunkrai, Naruemon; Sawangboon, Tatsirin; Ketchatturat, Jatuphum – Educational Research and Reviews, 2015
The aim of this research is to compare the prediction accuracy, test information, and evaluation results of a multidimensional computerized adaptive scholastic aptitude test program used with Grade 9 students under different test-review conditions. Grade 9 students of the Secondary Educational Service Area Office in the North-east of…
Descriptors: Foreign Countries, Secondary School Students, Grade 9, Computer Assisted Testing
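The selection logic at the heart of computerized adaptive testing, shown here for the simpler unidimensional 2PL case rather than the multidimensional program the study used, is to administer the unused item with maximum Fisher information at the current ability estimate, I(θ) = a²·P(θ)·(1 − P(θ)). The item parameters below are made up.

```python
# Maximum-information item selection for a unidimensional 2PL model (sketch).
import numpy as np

items = {"item1": (0.8, -1.0), "item2": (1.5, 0.2), "item3": (1.2, 1.1)}  # (a, b)

def information(theta, a, b):
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1 - p)

def next_item(theta, administered):
    candidates = {k: information(theta, *ab)
                  for k, ab in items.items() if k not in administered}
    return max(candidates, key=candidates.get)

print(next_item(theta=0.0, administered={"item1"}))
```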