Showing 1 to 15 of 17 results
Peer reviewed
Yongze Xu – Educational and Psychological Measurement, 2024
The questionnaire method has always been an important research method in psychology. The increasing prevalence of multidimensional trait measures in psychological research has led researchers to use longer questionnaires. However, questionnaires that are too long will inevitably reduce the quality of the completed questionnaires and the efficiency…
Descriptors: Item Response Theory, Questionnaires, Generalization, Simulation
Peer reviewed
LaVoie, Noelle; Parker, James; Legree, Peter J.; Ardison, Sharon; Kilcullen, Robert N. – Educational and Psychological Measurement, 2020
Automated scoring based on Latent Semantic Analysis (LSA) has been successfully used to score essays and constrained short answer responses. Scoring tests that capture open-ended, short answer responses poses some challenges for machine learning approaches. We used LSA techniques to score short answer responses to the Consequences Test, a measure…
Descriptors: Semantics, Evaluators, Essays, Scoring
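The LaVoie et al. abstract names Latent Semantic Analysis as the scoring technique but does not describe the pipeline. Below is a minimal, hypothetical sketch of LSA-style similarity scoring for short answers using scikit-learn; the corpus, responses, and the three-dimension reduction are illustrative assumptions, not details from the article.

```python
# Hypothetical LSA-style scoring sketch (illustrative data, not the article's pipeline):
# project reference answers and new responses into a low-dimensional semantic space,
# then score each response by its best cosine similarity to a reference answer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

reference_answers = [          # placeholder "good" answers; a real corpus would be larger
    "students could not study after sunset",
    "candles and lamps would become valuable",
    "traffic accidents would increase at night",
    "people would sleep more and work less",
]
responses = ["everyone would go to bed earlier", "lamp makers would get rich"]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(reference_answers + responses)

svd = TruncatedSVD(n_components=3, random_state=0)   # dimensionality is an assumption
Z = svd.fit_transform(X)

ref_vecs, resp_vecs = Z[: len(reference_answers)], Z[len(reference_answers):]
scores = cosine_similarity(resp_vecs, ref_vecs).max(axis=1)
print(dict(zip(responses, scores.round(3))))
```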
Peer reviewed
Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G. – Educational and Psychological Measurement, 2017
The current study proposes novel methods to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…
Descriptors: Testing, Performance, Prediction, Error of Measurement
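The Park et al. abstract refers to deriving standard errors of ability estimates analytically from test information rather than from simulation. A minimal sketch of that relationship under an assumed two-parameter logistic (2PL) model, where SE(theta) = 1 / sqrt(I(theta)) and I(theta) sums the item informations a_i^2 * P_i(theta) * (1 - P_i(theta)); the item parameters below are invented for illustration.

```python
# Sketch: analytic standard errors of ability from test information (assumed 2PL model).
# Item parameters are illustrative, not taken from the article.
import numpy as np

a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])    # discriminations
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # difficulties

def test_information(theta):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))  # 2PL probability of a correct response
    return np.sum(a**2 * p * (1 - p))           # Fisher information summed over items

for theta in np.linspace(-2, 2, 5):
    info = test_information(theta)
    print(f"theta={theta:+.1f}  info={info:.2f}  SE={1 / np.sqrt(info):.3f}")
```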
Peer reviewed
Medhanie, Amanuel G.; Dupuis, Danielle N.; LeBeau, Brandon; Harwell, Michael R.; Post, Thomas R. – Educational and Psychological Measurement, 2012
The first college mathematics course a student enrolls in is often affected by performance on a college mathematics placement test. Yet validity evidence of mathematics placement tests remains limited, even for nationally standardized placement tests, and when it is available usually consists of examining a student's subsequent performance in…
Descriptors: College Mathematics, Student Placement, Mathematics Tests, Test Validity
Peer reviewed
Davison, Mark L.; Semmes, Robert; Huang, Lan; Close, Catherine N. – Educational and Psychological Measurement, 2012
Data from 181 college students were used to assess whether math reasoning item response times in computerized testing can provide valid and reliable measures of a speed dimension. The alternate forms reliability of the speed dimension was .85. A two-dimensional structural equation model suggests that the speed dimension is related to the accuracy…
Descriptors: Computer Assisted Testing, Reaction Time, Reliability, Validity
Peer reviewed
Ng, Kok-Mun; Wang, Chuang; Kim, Do-Hong; Bodenhorn, Nancy – Educational and Psychological Measurement, 2010
The authors investigated the factor structure of the Schutte Self-Report Emotional Intelligence (SSREI) scale on international students. Via confirmatory factor analysis, the authors tested the fit of the models reported by Schutte et al. and five other studies to data from 640 international students in the United States. Results show that…
Descriptors: Emotional Intelligence, Factor Structure, Measures (Individuals), Factor Analysis
Peer reviewed
Lee, Yi-Hsuan; Ip, Edward H.; Fuh, Cheng-Der – Educational and Psychological Measurement, 2008
Although computerized adaptive tests have enjoyed tremendous growth, solutions for important problems remain unavailable. One problem is the control of item exposure rate. Because adaptive algorithms are designed to select optimal items, they choose items with high discriminating power. Thus, these items are selected more often than others,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items, Test Validity
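The Lee, Ip, and Fuh abstract explains why exposure control is needed: always administering the most informative item overexposes highly discriminating items. The abstract is cut off before the authors' solution, so the sketch below shows a generic "randomesque" strategy (pick at random from the k most informative remaining items), not the method proposed in the article; all item parameters are simulated.

```python
# Generic randomesque exposure control sketch (not the article's method):
# sample the next item from the k most informative unused items instead of
# always taking the single best one.
import numpy as np

rng = np.random.default_rng(0)
a = rng.uniform(0.5, 2.0, size=200)   # illustrative discriminations
b = rng.normal(0.0, 1.0, size=200)    # illustrative difficulties

def item_information(theta):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1 - p)

def select_item(theta, administered, k=5):
    info = item_information(theta)
    info[list(administered)] = -np.inf    # never readminister an item
    top_k = np.argsort(info)[-k:]         # k most informative remaining items
    return int(rng.choice(top_k))         # random pick spreads exposure across them

print("next item:", select_item(theta=0.0, administered={3, 17}))
```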
Peer reviewed
Arce-Ferrer, Alvaro J.; Guzman, Elvira Martinez – Educational and Psychological Measurement, 2009
This study investigates the effect of mode of administration of the Raven Standard Progressive Matrices test on distribution, accuracy, and meaning of raw scores. A random sample of high school students takes counterbalanced paper-and-pencil and computer-based administrations of the test and answers a questionnaire surveying preferences for…
Descriptors: Factor Analysis, Raw Scores, Statistical Analysis, Computer Assisted Testing
Peer reviewed
Kim, Do-Hong; Huynh, Huynh – Educational and Psychological Measurement, 2008
The current study compared student performance between paper-and-pencil testing (PPT) and computer-based testing (CBT) on a large-scale statewide end-of-course English examination. Analyses were conducted at both the item and test levels. The overall results suggest that scores obtained from PPT and CBT were comparable. However, at the content…
Descriptors: Reading Comprehension, Computer Assisted Testing, Factor Analysis, Comparative Testing
Peer reviewed
Mueller, Karsten; Liebig, Christian; Hattrup, Keith – Educational and Psychological Measurement, 2007
Two quasi-experimental field studies were conducted to evaluate the psychometric equivalence of computerized and paper-and-pencil job satisfaction measures. The present research extends previous work in the area by providing better control of common threats to validity in quasi-experimental research on test mode effects and by evaluating a more…
Descriptors: Psychometrics, Field Studies, Job Satisfaction, Computer Assisted Testing
Peer reviewed
Huba, G. J. – Educational and Psychological Measurement, 1986
The runs test for random sequences of responding is proposed for application in long inventories with dichotomous items as an index of stereotyped responding. This index is useful for detecting whether the client shifts between response alternatives more or less frequently than would be expected by chance. (LMO)
Descriptors: Computer Assisted Testing, Personality Measures, Response Style (Tests), Scoring
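The Huba abstract proposes the runs test as an index of stereotyped responding on dichotomous items. A minimal Wald-Wolfowitz-style sketch, assuming responses coded 0/1: a strongly negative z means fewer shifts between alternatives than chance would produce (stereotyped responding), and a strongly positive z means excessive alternation.

```python
# Sketch: runs test on a dichotomous (0/1) response vector, comparing the observed
# number of runs to its expectation and variance under random responding.
import math

def runs_z(responses):
    n1 = sum(responses)
    n0 = len(responses) - n1
    if n0 == 0 or n1 == 0:
        raise ValueError("both response alternatives must occur")
    runs = 1 + sum(x != y for x, y in zip(responses, responses[1:]))  # maximal streaks
    n = n0 + n1
    expected = 1 + 2 * n0 * n1 / n
    variance = 2 * n0 * n1 * (2 * n0 * n1 - n) / (n ** 2 * (n - 1))
    return (runs - expected) / math.sqrt(variance)

# Long streaks of the same answer produce a large negative z (too few runs).
print(round(runs_z([1] * 20 + [0] * 20 + [1] * 20), 2))
```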
Peer reviewed
Krus, David J.; Ceurvorst, Robert W. – Educational and Psychological Measurement, 1978
An algorithm for updating the means and variances of a norm group after each computer-assisted administration of a test is described. The algorithm does not require storage of the whole data set, and provides for unlimited, continuous expansion of the test norms. (Author)
Descriptors: Computer Assisted Testing, Computer Programs, Norms, Statistical Data
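The Krus and Ceurvorst abstract describes updating a norm group's means and variances after each administration without storing the full data set. The abstract does not give the updating formulas; a Welford-style online update is one standard way to accomplish this and is sketched below for a single score scale.

```python
# Sketch of a single-pass (online) update of a norm group's mean and variance.
# Welford's algorithm is used for illustration; the article's exact formulas
# are not given in the abstract.
class RunningNorms:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the current mean

    def update(self, score):
        self.n += 1
        delta = score - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (score - self.mean)

    @property
    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else float("nan")

norms = RunningNorms()
for score in [52, 61, 47, 58, 65]:  # scores arrive one administration at a time
    norms.update(score)
print(norms.n, round(norms.mean, 2), round(norms.variance, 2))
```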
Peer reviewed
Brooks, Sarah; Hartz, Mary A. – Educational and Psychological Measurement, 1978
The predictive ability of a mathematics test organized into a branching test for computer-interactive administration was investigated. Twenty-five blocks of five items were used in the branching. Each testee took 25 items, with each subsequent block being determined by prior performance. Results supported the branching technique. (JKS)
Descriptors: Achievement Tests, Branching, College Mathematics, Computer Assisted Testing
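The Brooks and Hartz abstract describes block-level branching: each testee answers five-item blocks, with each subsequent block determined by performance on the previous one. The routing rule is not specified in the abstract; the sketch below assumes a simple up/down rule with made-up thresholds and difficulty levels.

```python
# Hypothetical block-branching sketch: route to a harder block after a high
# block score and to an easier block after a low one. Thresholds and the
# number of difficulty levels are assumptions, not details from the article.
def next_block_level(current_level, block_score, n_levels=9):
    if block_score >= 4:                       # 4-5 correct: move up in difficulty
        return min(current_level + 1, n_levels - 1)
    if block_score <= 1:                       # 0-1 correct: move down
        return max(current_level - 1, 0)
    return current_level                       # 2-3 correct: stay at the same level

level = 4  # start in the middle of the difficulty range
for block_score in [5, 4, 2, 1, 3]:            # scores on five successive 5-item blocks
    level = next_block_level(level, block_score)
    print("route to block level:", level)
```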
Peer reviewed
Willoughby, T. Lee; And Others – Educational and Psychological Measurement, 1977
The concurrent validity of a computer assisted test construction system designed to measure information in the medical sciences was assessed. Results support the usefulness of the system. (Author/JKS)
Descriptors: Computer Assisted Instruction, Computer Assisted Testing, Formative Evaluation, Higher Education
Peer reviewed
Scherbaum, Charles A.; Cohen-Charash, Yochi; Kern, Michael J. – Educational and Psychological Measurement, 2006
General self-efficacy (GSE), individuals' belief in their ability to perform well in a variety of situations, has been the subject of increasing research attention. However, the psychometric properties (e.g., reliability, validity) associated with the scores on GSE measures have been criticized, which has hindered efforts to further establish the…
Descriptors: Self Efficacy, Measures (Individuals), Psychometrics, Reliability