Herborn, Katharina; Mustafic, Maida; Greiff, Samuel – Journal of Educational Measurement, 2017
Collaborative problem solving (CPS) assessment is a new academic research field with a number of educational implications. In 2015, the Programme for International Student Assessment (PISA) assessed CPS with a computer-simulated human-agent (H-A) approach that claimed to measure 12 individual CPS skills for the first time. After reviewing the…
Descriptors: Cooperative Learning, Problem Solving, Computer Simulation, Evaluation Methods
Wilson, Mark; Gochyyev, Perman; Scalise, Kathleen – Journal of Educational Measurement, 2017
This article summarizes assessment of cognitive skills through collaborative tasks, using field test results from the Assessment and Teaching of 21st Century Skills (ATC21S) project. This project, sponsored by Cisco, Intel, and Microsoft, aims to help educators around the world enable students with the skills to succeed in future career and…
Descriptors: Cognitive Ability, Thinking Skills, Evaluation Methods, Educational Assessment
Zimmerman, Donald W. – Journal of Educational Measurement, 2009
This study was an investigation of the relation between the reliability of difference scores, considered as a parameter characterizing a population of examinees, and the reliability estimates obtained from random samples from the population. The parameters in familiar equations for the reliability of difference scores were redefined in such a way…
Descriptors: Computer Simulation, Reliability, Population Groups, Scores
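As a rough illustration of the quantity under study (not Zimmerman's redefinition of the parameters), the sketch below applies the classical formula for the reliability of difference scores and checks it against the parallel-forms correlation of simulated difference scores; all variances, reliabilities, and correlations are hypothetical.

```python
# Hedged illustration, not Zimmerman's analysis: the classical formula for the
# reliability of difference scores D = X - Y, checked against the correlation
# between parallel difference scores in a large simulated population.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000                      # simulated examinees

# True-score and error variances (hypothetical values).
var_tx, var_ty = 0.85, 0.80      # true-score variances of X and Y
var_ex, var_ey = 0.15, 0.20      # error variances, so total variances are 1.0
rho_t = 0.70                     # correlation between the true scores

# Population quantities implied by classical test theory.
var_x, var_y = var_tx + var_ex, var_ty + var_ey
rel_x, rel_y = var_tx / var_x, var_ty / var_y
rho_xy = rho_t * np.sqrt(var_tx * var_ty) / np.sqrt(var_x * var_y)
rel_d = ((var_x * rel_x + var_y * rel_y
          - 2 * np.sqrt(var_x * var_y) * rho_xy)
         / (var_x + var_y - 2 * np.sqrt(var_x * var_y) * rho_xy))

# Monte Carlo check: reliability = correlation between parallel difference scores.
cov_t = [[var_tx, rho_t * np.sqrt(var_tx * var_ty)],
         [rho_t * np.sqrt(var_tx * var_ty), var_ty]]
t = rng.multivariate_normal([0, 0], cov_t, size=n)
x1 = t[:, 0] + rng.normal(0, np.sqrt(var_ex), n)
x2 = t[:, 0] + rng.normal(0, np.sqrt(var_ex), n)
y1 = t[:, 1] + rng.normal(0, np.sqrt(var_ey), n)
y2 = t[:, 1] + rng.normal(0, np.sqrt(var_ey), n)
d1, d2 = x1 - y1, x2 - y2
print(f"formula: {rel_d:.3f}   parallel-forms estimate: {np.corrcoef(d1, d2)[0, 1]:.3f}")
```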
Briggs, Derek C.; Wilson, Mark – Journal of Educational Measurement, 2007
An approach called generalizability in item response modeling (GIRM) is introduced in this article. The GIRM approach essentially incorporates the sampling model of generalizability theory (GT) into the scaling model of item response theory (IRT) by making distributional assumptions about the relevant measurement facets. By specifying a random…
Descriptors: Markov Processes, Generalizability Theory, Item Response Theory, Computation
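GIRM itself estimates an IRT model with random measurement facets by MCMC; the much simpler sketch below only illustrates the two ingredients it joins, generating data from a Rasch model whose items are sampled from an item population and then running an ordinary person x item G study on the resulting scores. All sample sizes and variances are illustrative.

```python
# Hedged sketch, not the GIRM estimator: (1) data from a Rasch model whose
# items are a random sample from an item population, and (2) a classical
# person x item G study run on the resulting 0/1 scores.
import numpy as np

rng = np.random.default_rng(1)
n_p, n_i = 2000, 30                       # persons, sampled items

theta = rng.normal(0.0, 1.0, n_p)         # person abilities (random facet)
beta = rng.normal(0.0, 1.0, n_i)          # item difficulties (random facet)
p = 1 / (1 + np.exp(-(theta[:, None] - beta[None, :])))
x = rng.binomial(1, p)                    # scored response matrix

# Mean squares for the crossed p x i random-effects design.
gm = x.mean()
ms_p = n_i * np.sum((x.mean(axis=1) - gm) ** 2) / (n_p - 1)
ms_i = n_p * np.sum((x.mean(axis=0) - gm) ** 2) / (n_i - 1)
resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + gm
ms_res = np.sum(resid ** 2) / ((n_p - 1) * (n_i - 1))

var_p = (ms_p - ms_res) / n_i             # person variance component
var_i = (ms_i - ms_res) / n_p             # item variance component
var_res = ms_res                          # interaction/error component
g_coef = var_p / (var_p + var_res / n_i)  # generalizability coefficient for n_i items
print(f"sigma2_p={var_p:.3f}  sigma2_i={var_i:.3f}  sigma2_pi,e={var_res:.3f}  Ep2={g_coef:.3f}")
```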

Oshima, Takako C.; Miller, M. David – Journal of Educational Measurement, 1990
A bidimensional 2-parameter logistic model was applied to data generated for 2 groups on a 40-item test. Item parameters were the same across groups; correlation across the 2 traits varied. Results indicate the need for caution in using item-response theory (IRT)-based invariance indexes with multidimensional data for these groups. (TJH)
Descriptors: Computer Simulation, Correlation, Discriminant Analysis, Item Response Theory
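A minimal data-generation sketch in the spirit of the design (the actual item parameters and trait correlations are not given in the abstract, so the values below are purely illustrative): a compensatory two-dimensional 2PL model with identical item parameters in two groups whose traits correlate to different degrees.

```python
# Illustrative data generation only, not the authors' study conditions:
# a compensatory two-dimensional 2PL with the same item parameters in both
# groups but a different correlation between the two traits in each group.
import numpy as np

rng = np.random.default_rng(2)
n_items, n_per_group = 40, 1000

# Item parameters shared by both groups: two discriminations and an intercept.
a = rng.uniform(0.5, 1.5, size=(n_items, 2))
d = rng.normal(0.0, 1.0, size=n_items)


def simulate_group(rho, n=n_per_group):
    """Responses for one group whose two traits correlate at rho."""
    theta = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
    logits = theta @ a.T + d            # compensatory M2PL logit
    return rng.binomial(1, 1 / (1 + np.exp(-logits)))


x_ref = simulate_group(rho=0.3)         # reference group
x_foc = simulate_group(rho=0.9)         # focal group: same items, higher rho
print(x_ref.mean(axis=0)[:5], x_foc.mean(axis=0)[:5])   # item p-values by group
```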

Wilcox, Rand R. – Journal of Educational Measurement, 1987
Four procedures are discussed for obtaining a confidence interval when answer-until-correct scoring is used in multiple choice tests. Simulated data show that the choice of procedure depends upon sample size. (GDC)
Descriptors: Computer Simulation, Multiple Choice Tests, Sample Size, Scoring
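The abstract does not describe the four procedures, so the sketch below is only a hedged illustration of the underlying model: if an examinee who knows an item answers it correctly on the first attempt and otherwise guesses at random among k options, a binomial interval on the first-attempt proportion can be mapped onto the proportion of items known.

```python
# Hedged illustration, not one of Wilcox's four procedures: under the simplest
# answer-until-correct model, P(first try correct) = zeta + (1 - zeta)/k, where
# zeta is the proportion of items the examinee knows; a Wald binomial interval
# on the first-try proportion is mapped onto zeta.
import numpy as np


def auc_interval(first_try_correct, n_items, k=4, z=1.96):
    """Wald-style interval for the proportion of items known (illustrative)."""
    p_hat = first_try_correct / n_items
    se = np.sqrt(p_hat * (1 - p_hat) / n_items)
    lo, hi = max(0.0, p_hat - z * se), min(1.0, p_hat + z * se)
    to_zeta = lambda p: np.clip((k * p - 1) / (k - 1), 0.0, 1.0)
    return to_zeta(lo), to_zeta(p_hat), to_zeta(hi)


# A short test gives a wide interval; the abstract's point is that the right
# procedure depends on sample size.
print(auc_interval(first_try_correct=14, n_items=20, k=4))
print(auc_interval(first_try_correct=70, n_items=100, k=4))
```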

Frary, Robert B. – Journal of Educational Measurement, 1989
Responses to a 50-item, 4-choice test were simulated for 1,000 examinees under conventional formula-scoring instructions. Based on 192 simulation runs, formula scores and expected formula scores were determined for each examinee, both allowing and not allowing for inappropriate omissions. (TJH)
Descriptors: Computer Simulation, Difficulty Level, Guessing (Tests), Multiple Choice Tests
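A simplified sketch of the scoring rule involved, not Frary's 192-run design: formula scores FS = R - W/(C - 1) computed for simulated examinees who answer the items they know and either omit or guess blindly on the rest; the knowledge and omission rates are hypothetical.

```python
# Simplified sketch, not Frary's design: formula scoring on a simulated
# 50-item, 4-choice test where unknown items are either omitted or guessed.
import numpy as np

rng = np.random.default_rng(3)
n_examinees, n_items, n_choices = 1000, 50, 4

knows = rng.binomial(1, rng.uniform(0.3, 0.9, n_examinees)[:, None],
                     size=(n_examinees, n_items)).astype(bool)
omit_rate = 0.5                            # chance an unknown item is omitted
omits = (~knows) & (rng.random((n_examinees, n_items)) < omit_rate)
guesses = (~knows) & (~omits)
lucky = guesses & (rng.random((n_examinees, n_items)) < 1 / n_choices)

rights = (knows | lucky).sum(axis=1)
wrongs = (guesses & ~lucky).sum(axis=1)
formula_score = rights - wrongs / (n_choices - 1)

# The expected formula score if every unknown item were guessed rather than
# omitted equals the number known, since random guesses cancel in expectation.
expected_if_all_guessed = knows.sum(axis=1)
print(formula_score[:5], expected_if_all_guessed[:5])
```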

Andrich, David – Journal of Educational Measurement, 1989
The distinction between deterministic and statistical reasoning in the application of models to educational measurement is explicated. Issues addressed include the relationship between data and estimation equations, distinction between parameters and parameter estimates, and power of tests of fit of responses across the ability continuum. (TJH)
Descriptors: Computer Simulation, Equations (Mathematics), Estimation (Mathematics), Goodness of Fit

Clauser, Brian E.; Harik, Polina; Clyman, Stephen G. – Journal of Educational Measurement, 2000
Used generalizability theory to assess the impact of using independent, randomly equivalent groups of experts to develop scoring algorithms for computer simulation tasks designed to measure physicians' patient management skills. Results with three groups of four medical school faculty members each suggest that the impact of the expert group may be…
Descriptors: Computer Simulation, Generalizability Theory, Performance Based Assessment, Physicians
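A hedged sketch of the kind of design described, not the study's analysis: each expert group's scoring algorithm yields a score for every examinee, and a small crossed G study asks how much variance the choice of group contributes to a score based on a single group's algorithm. All effect sizes are hypothetical.

```python
# Hedged sketch, not the study's analysis: crossed examinee x expert-group
# design, with variance components from the usual random-effects mean squares.
import numpy as np

rng = np.random.default_rng(4)
n_e, n_g = 200, 3                               # examinees, expert groups

true_skill = rng.normal(70, 8, n_e)             # examinee effect
group_bias = rng.normal(0, 1.5, n_g)            # expert-group (algorithm) effect
noise = rng.normal(0, 4, (n_e, n_g))            # examinee x group interaction/error
scores = true_skill[:, None] + group_bias[None, :] + noise

gm = scores.mean()
ms_e = n_g * np.sum((scores.mean(axis=1) - gm) ** 2) / (n_e - 1)
ms_g = n_e * np.sum((scores.mean(axis=0) - gm) ** 2) / (n_g - 1)
resid = scores - scores.mean(axis=1, keepdims=True) - scores.mean(axis=0, keepdims=True) + gm
ms_res = np.sum(resid ** 2) / ((n_e - 1) * (n_g - 1))

var_e = (ms_e - ms_res) / n_g                   # examinee variance component
var_g = max(0.0, (ms_g - ms_res) / n_e)         # expert-group variance component
var_eg = ms_res                                 # interaction/error component
# Dependability of a score based on a single expert group's algorithm.
phi_single = var_e / (var_e + var_g + var_eg)
print(f"sigma2_group={var_g:.2f}  sigma2_exg={var_eg:.2f}  phi(1 group)={phi_single:.3f}")
```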

Nandakumar, Ratna – Journal of Educational Measurement, 1991
W. F. Stout's (1990) statistical test of essential unidimensionality, a method for detecting lack of unidimensionality in test data, was studied using Monte Carlo simulations. The procedure is a hypothesis test of whether the essential dimensionality equals one or exceeds one, regardless of the traditional dimensionality.…
Descriptors: Ability, Achievement Tests, Computer Simulation, Equations (Mathematics)
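The sketch below illustrates only the idea behind Stout's procedure, not his test statistic or its bias correction: for essentially unidimensional data, covariances between items in an assessment subtest, conditional on the score over the remaining items, should be near zero, whereas a second dominant dimension leaves them clearly positive. The item split and generating model are illustrative.

```python
# Hedged illustration of the idea behind Stout's procedure, not his statistic:
# conditional covariances of assessment-subtest items, given the score on the
# remaining items, are near zero when the data are essentially unidimensional.
import numpy as np

rng = np.random.default_rng(5)
n, n_items = 5000, 40
at_items = np.arange(0, 10)                 # assessment subtest (illustrative split)
pt_items = np.arange(10, n_items)           # partitioning subtest


def simulate(rho):
    """40-item Rasch-type data; items 0-9 load on trait 2, the rest on trait 1."""
    t1 = rng.normal(0, 1, n)
    t2 = rho * t1 + np.sqrt(1 - rho ** 2) * rng.normal(0, 1, n)
    theta = np.column_stack([t1, t2])
    b = rng.normal(0, 1, n_items)
    dim = np.where(np.arange(n_items) < 10, 1, 0)   # which trait drives each item
    logits = theta[:, dim] - b
    return rng.binomial(1, 1 / (1 + np.exp(-logits)))


def mean_conditional_cov(x):
    """Average off-diagonal covariance of AT items within partitioning-score strata."""
    rest = x[:, pt_items].sum(axis=1)
    covs = []
    for s in np.unique(rest):
        grp = x[rest == s][:, at_items]
        if len(grp) < 20:
            continue
        c = np.cov(grp, rowvar=False)
        covs.append(c[np.triu_indices_from(c, k=1)].mean())
    return np.mean(covs)


print("unidimensional :", round(mean_conditional_cov(simulate(rho=1.0)), 4))
print("two-dimensional:", round(mean_conditional_cov(simulate(rho=0.3)), 4))
```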

Gressard, Risa P.; Loyd, Brenda H. – Journal of Educational Measurement, 1991
A Monte Carlo study, which simulated 10,000 examinees' responses to four tests, investigated the effect of item stratification on parameter estimation in multiple matrix sampling of achievement data. Practical multiple matrix sampling is based on item stratification by item discrimination and a sampling plan with a moderate number of subtests. (SLD)
Descriptors: Achievement Tests, Comparative Testing, Computer Simulation, Estimation (Mathematics)
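A hedged sketch of the sampling design being studied, not the authors' estimation procedure: items are stratified by discrimination and dealt across a moderate number of subtests, each simulated examinee takes one subtest, and the full-pool mean score is estimated from the matrix sample.

```python
# Hedged sketch of the design, not the authors' estimation procedure.
import numpy as np

rng = np.random.default_rng(6)
n_examinees, n_items, n_subtests = 10_000, 60, 6

a = rng.lognormal(0.0, 0.3, n_items)             # discriminations (2PL-style)
b = rng.normal(0.0, 1.0, n_items)                # difficulties
theta = rng.normal(0.0, 1.0, n_examinees)

# Stratify items by discrimination, then deal stratum members across subtests
# so every subtest spans the discrimination range.
order = np.argsort(a)
subtest_of_item = np.empty(n_items, dtype=int)
subtest_of_item[order] = np.arange(n_items) % n_subtests

# Each examinee is randomly assigned one subtest and answers only its items.
subtest_of_person = rng.integers(0, n_subtests, n_examinees)
p_correct = 1 / (1 + np.exp(-a * (theta[:, None] - b)))
responses = rng.binomial(1, p_correct)           # full matrix, kept for reference

taken = subtest_of_item[None, :] == subtest_of_person[:, None]
observed_prop = responses[taken].sum() / taken.sum()
estimated_pool_mean = observed_prop * n_items    # estimated mean full-pool score
true_pool_mean = responses.sum(axis=1).mean()
print(f"estimated: {estimated_pool_mean:.2f}   actual: {true_pool_mean:.2f}")
```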

van den Bergh, Huub; Eiting, Mindert H. – Journal of Educational Measurement, 1989
A method of assessing rater reliability via a design of overlapping rater teams is presented. Covariances or correlations of ratings can be analyzed with LISREL models. Models in which the rater reliabilities are congeneric, tau-equivalent, or parallel can be tested. Two examples based on essay ratings are presented. (TJH)
Descriptors: Analysis of Covariance, Computer Simulation, Correlation, Elementary Secondary Education
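A minimal sketch of the congeneric-raters model only; it ignores the overlapping-teams design and fits the one-factor model with a simple iterated principal-axis routine rather than LISREL. Loadings and sample size are hypothetical.

```python
# Minimal sketch of the congeneric model for raters, fitted by iterated
# principal-axis factoring (not LISREL); complete data, no overlapping teams.
import numpy as np

rng = np.random.default_rng(8)
n_essays = 500
loadings_true = np.array([0.9, 0.8, 0.6, 0.7])       # congeneric: unequal loadings

quality = rng.normal(0, 1, n_essays)                  # true essay quality
errors = rng.normal(0, 1, (n_essays, 4)) * np.sqrt(1 - loadings_true ** 2)
ratings = quality[:, None] * loadings_true + errors   # standardized ratings

r = np.corrcoef(ratings, rowvar=False)

# One-factor principal-axis factoring with iterated communalities.
h2 = 1 - 1 / np.diag(np.linalg.inv(r))                # start: squared multiple correlations
for _ in range(100):
    rr = r.copy()
    np.fill_diagonal(rr, h2)
    vals, vecs = np.linalg.eigh(rr)
    lam = np.sqrt(vals[-1]) * vecs[:, -1]
    lam *= np.sign(lam.sum())                          # fix sign of the factor
    h2 = lam ** 2

rater_reliability = lam ** 2                           # congeneric rater reliabilities
omega = lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())
print(np.round(rater_reliability, 2), round(float(omega), 3))
```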

Frary, Robert B. – Journal of Educational Measurement, 1985
Responses to a sample test were simulated for examinees under free-response and multiple-choice formats. Test score sets were correlated with randomly generated sets of unit-normal measures. The superiority of free-response tests was sufficiently small that other considerations might justifiably dictate format choice. (Author/DWH)
Descriptors: Comparative Analysis, Computer Simulation, Essay Tests, Guessing (Tests)
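A rough sketch in the spirit of the comparison, not Frary's simulation: the same simulated examinees are scored under a free-response format (no guessing) and a four-choice format (blind guessing on unknown items), and each score set is correlated with the underlying unit-normal trait.

```python
# Rough sketch of the format comparison; all generating values are illustrative.
import numpy as np

rng = np.random.default_rng(9)
n_examinees, n_items, n_choices = 2000, 50, 4

trait = rng.normal(0, 1, n_examinees)               # unit-normal criterion
b = rng.normal(0, 1, n_items)
p_know = 1 / (1 + np.exp(-(trait[:, None] - b)))    # chance of truly knowing an item
knows = rng.binomial(1, p_know)

fr_score = knows.sum(axis=1)                                        # free response
guessed = rng.binomial(1, 1 / n_choices, size=knows.shape) * (1 - knows)
mc_score = (knows + guessed).sum(axis=1)                            # multiple choice

print("free response vs trait  :", round(np.corrcoef(fr_score, trait)[0, 1], 3))
print("multiple choice vs trait:", round(np.corrcoef(mc_score, trait)[0, 1], 3))
```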

De Ayala, R. J.; And Others – Journal of Educational Measurement, 1990
F. M. Lord's flexilevel computerized adaptive testing (CAT) procedure was compared to an item response theory (IRT)-based CAT procedure that uses Bayesian ability estimation with various standard errors of estimate used to terminate the test. Ability estimates from flexilevel CATs were as accurate as those from Bayesian CATs. (TJH)
Descriptors: Ability Identification, Adaptive Testing, Bayesian Statistics, Comparative Analysis
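A minimal sketch of the Bayesian side of the comparison only (EAP estimation on a grid, maximum-information item selection, and termination when the posterior standard deviation falls below a cutoff); the flexilevel branching procedure is not shown, and the item bank is illustrative.

```python
# Minimal Bayesian CAT sketch: grid-based EAP estimation, maximum-information
# item selection, and a posterior-SD stopping rule.  Item bank is illustrative.
import numpy as np

rng = np.random.default_rng(10)
grid = np.linspace(-4, 4, 81)
prior = np.exp(-0.5 * grid ** 2)                     # standard normal prior (unnormalized)
a = rng.lognormal(0.0, 0.3, 200)                     # 2PL item bank
b = rng.normal(0.0, 1.0, 200)


def prob(theta, i):
    return 1 / (1 + np.exp(-a[i] * (theta - b[i])))


def bayesian_cat(true_theta, se_stop=0.30, max_items=40):
    post, used = prior.copy(), []
    theta_hat, se = 0.0, np.inf
    while se > se_stop and len(used) < max_items:
        p_all = prob(theta_hat, slice(None))
        info = a ** 2 * p_all * (1 - p_all)          # 2PL item information
        info[used] = -np.inf                         # never reuse an item
        i = int(np.argmax(info))
        u = rng.binomial(1, prob(true_theta, i))     # simulated response
        post = post * (prob(grid, i) if u else 1 - prob(grid, i))
        post = post / post.sum()
        theta_hat = float(np.sum(grid * post))       # EAP estimate
        se = float(np.sqrt(np.sum((grid - theta_hat) ** 2 * post)))
        used.append(i)
    return theta_hat, se, len(used)


print(bayesian_cat(true_theta=1.2))
```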

Plake, Barbara S.; Kane, Michael T. – Journal of Educational Measurement, 1991
Several methods for determining a passing score on an examination from individual raters' estimates of minimal pass levels were compared through simulation. The methods differed in the weight that each item's estimates received in the aggregation process. Reasons why the simplest procedure is preferred are discussed. (SLD)
Descriptors: Comparative Analysis, Computer Simulation, Cutting Scores, Estimation (Mathematics)
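The specific weighting schemes compared in the article are not reproduced here; the sketch below shows only the simplest aggregation, the mean over raters of each rater's summed minimal pass levels, plus one illustrative weighted alternative. All rater data are hypothetical.

```python
# Hedged sketch of cut-score aggregation from raters' minimal pass levels (MPLs);
# the weighted variant is illustrative, not one of the article's schemes.
import numpy as np

rng = np.random.default_rng(11)
n_raters, n_items = 10, 40

true_mpl = rng.uniform(0.3, 0.9, n_items)             # "true" item pass levels
mpls = np.clip(true_mpl + rng.normal(0, 0.1, (n_raters, n_items)), 0, 1)

cut_unweighted = mpls.sum(axis=1).mean()              # simplest procedure

# One weighted alternative: down-weight items the raters disagree about most.
w = 1 / mpls.var(axis=0)
cut_weighted = (mpls.mean(axis=0) * w).sum() / w.sum() * n_items

print(f"unweighted cut: {cut_unweighted:.2f}   variance-weighted cut: {cut_weighted:.2f}")
```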