Lee, Sunbok; Choi, Youn-Jeng; Cohen, Allan S. – International Journal of Assessment Tools in Education, 2018
A simulation study is a useful tool in examining how validly item response theory (IRT) models can be applied in various settings. Typically, a large number of replications are required to obtain the desired precision. However, many standard software packages in IRT, such as MULTILOG and BILOG, are not well suited for a simulation study requiring…
Descriptors: Item Response Theory, Simulation, Replication (Evaluation), Automation
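The simulation-study design described above — repeatedly generating item response data under a known IRT model — can be sketched in a few lines. This is a hypothetical illustration using the standard 2PL model with the usual notation (discrimination a, difficulty b, ability theta); it is not code from MULTILOG, BILOG, or the paper itself.

```python
import numpy as np

def simulate_2pl(n_persons=500, n_items=20, n_reps=100, seed=0):
    """Generate n_reps dichotomous response datasets under a 2PL IRT model."""
    rng = np.random.default_rng(seed)
    a = rng.lognormal(mean=0.0, sigma=0.3, size=n_items)  # discrimination
    b = rng.normal(0.0, 1.0, size=n_items)                # difficulty
    datasets = []
    for _ in range(n_reps):
        theta = rng.normal(0.0, 1.0, size=n_persons)      # person ability
        # P(correct) for every person-item pair via the 2PL logistic curve
        p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
        datasets.append((rng.uniform(size=p.shape) < p).astype(int))
    return a, b, datasets

a, b, datasets = simulate_2pl()
print(len(datasets), datasets[0].shape)
```

In an actual simulation study, each replicated dataset would then be fed to an estimation routine and the recovered item parameters compared against the generating values of a and b.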
Brandon Sepulvado; Jennifer Hamilton – Society for Research on Educational Effectiveness, 2021
Background: Traditional survey efforts to gather outcome data at scale have significant limitations, including cost, time, and respondent burden. This pilot study explored new and innovative large-scale methods of collecting and validating data from publicly available sources. Taking advantage of emerging data science techniques, we leverage…
Descriptors: Automation, Data Collection, Data Analysis, Validity
Miller, L. D.; Soh, Leen-Kiat; Samal, Ashok; Nugent, Gwen – International Journal of Artificial Intelligence in Education, 2012
Learning objects (LOs) are digital or non-digital entities used for learning, education, or training, commonly stored in repositories searchable by their associated metadata. Unfortunately, under the current standards, such metadata is often missing or incorrectly entered, making search difficult or impossible. In this paper, we investigate…
Descriptors: Computer Science Education, Metadata, Internet, Artificial Intelligence
Ifenthaler, Dirk – Educational Technology Research and Development, 2010
The demand for good instructional environments presupposes valid and reliable analytical instruments for educational research. This paper introduces the "SMD Technology" (Surface, Matching, Deep Structure), which measures relational, structural, and semantic levels of graphical representations and concept maps. The reliability and validity of the…
Descriptors: Concept Mapping, Educational Research, Semantics, Validity
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater® has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation
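The general idea behind feature-based automated essay scoring — extract a small set of interpretable text features, then map them to a rubric score with a fitted model — can be sketched as follows. The features, weights, and score range here are invented for illustration and are not e-rater's actual feature set or model.

```python
import numpy as np

def extract_features(essay: str) -> np.ndarray:
    """Compute a small, interpretable feature vector for one essay."""
    words = essay.split()
    n_words = len(words)
    avg_word_len = sum(len(w) for w in words) / max(n_words, 1)
    n_sentences = max(essay.count(".") + essay.count("!") + essay.count("?"), 1)
    words_per_sentence = n_words / n_sentences
    return np.array([n_words, avg_word_len, words_per_sentence])

def score(essay: str, weights: np.ndarray, intercept: float = 0.0) -> float:
    """Linear model over the features, clipped to a 1-6 rubric scale."""
    raw = intercept + extract_features(essay) @ weights
    return float(np.clip(raw, 1.0, 6.0))

# Toy weights; in practice these would be fit (e.g. by least squares)
# against human-assigned scores on a training corpus.
w = np.array([0.01, 0.2, 0.05])
print(score("This is a short sample essay. It has two sentences.", w))
```

Because the model is a small linear combination of interpretable features, each feature's contribution to the final score can be inspected directly, which is one reason such designs are favored over opaque scoring systems.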