Showing 526 to 540 of 9,533 results
Peer reviewed
Direct link
Sakworawich, Arnond; Wainer, Howard – Journal of Educational and Behavioral Statistics, 2020
Test scoring models vary in their generality; some even adjust for examinees answering multiple-choice items correctly by accident (guessing), but no models that we are aware of automatically adjust an examinee's score when there is internal evidence of cheating. In this study, we use a combination of jackknife technology with an adaptive robust…
Descriptors: Scoring, Cheating, Test Items, Licensing Examinations (Professions)
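The abstract above names jackknife resampling combined with a robust estimator. As a generic illustration only (not the authors' scoring model), the Python sketch below applies a leave-one-out jackknife to a trimmed mean, a hypothetical stand-in for a robust estimate, and reports the bias-corrected estimate and its standard error; the data are random placeholders.

    import numpy as np

    def trimmed_mean(x, prop=0.1):
        # Hypothetical robust location estimate: drop the lowest and highest 10% of values.
        lo, hi = np.quantile(x, [prop, 1 - prop])
        return x[(x >= lo) & (x <= hi)].mean()

    def jackknife(x, stat):
        # Leave-one-out jackknife: recompute the statistic with each observation removed,
        # then return the bias-corrected estimate and the jackknife standard error.
        n = len(x)
        loo = np.array([stat(np.delete(x, i)) for i in range(n)])
        estimate = n * stat(x) - (n - 1) * loo.mean()
        std_error = np.sqrt((n - 1) / n * ((loo - loo.mean()) ** 2).sum())
        return estimate, std_error

    scores = np.random.default_rng(0).normal(size=50)  # placeholder data, not real item scores
    print(jackknife(scores, trimmed_mean))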
Peer reviewed
Direct link
Fournier, Geneviève; Lachance, Lise; Viviers, Simon; Lahrizi, Imane Zineb; Goyer, Liette; Masdonati, Jonas – International Journal for Educational and Vocational Guidance, 2020
The paper first presents the theoretical foundations used to develop a pre-experimental version of a questionnaire on the relationship to work, and then describes the four stages of its initial validation leading to an experimental version. These stages included: (1) Defining the dimensions and sub-dimensions of the relationship to work concept; (2)…
Descriptors: Test Construction, Content Validity, Work Attitudes, Test Items
Peer reviewed
Direct link
Wise, Steven L. – Applied Measurement in Education, 2020
In achievement testing there is typically a practical requirement that the set of items administered should be representative of some target content domain. This is accomplished by establishing test blueprints specifying the content constraints to be followed when selecting the items for a test. Sometimes, however, students give disengaged…
Descriptors: Test Items, Test Content, Achievement Tests, Guessing (Tests)
Peer reviewed
Direct link
Kim, Stella Yun; Lee, Won-Chan – Applied Measurement in Education, 2023
This study evaluates various scoring methods, including number-correct scoring, IRT theta scoring, and hybrid scoring, in terms of scale-score stability over time. A simulation study was conducted to examine the relative performance of five scoring methods in terms of preserving the first two moments of scale scores for a population in a chain of…
Descriptors: Scoring, Comparative Analysis, Item Response Theory, Simulation
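The entry above compares number-correct and IRT theta scoring. As a minimal sketch under assumed 2PL item parameters (not the models, items, or scale-score conversions used in the study), the snippet below shows why the two can diverge: two response patterns with the same number-correct score receive different theta estimates because theta weighs which items were answered correctly.

    import numpy as np

    # Hypothetical 2PL item parameters, purely for illustration.
    a = np.array([0.6, 1.0, 1.4, 1.8])     # discriminations
    b = np.array([-1.0, -0.3, 0.4, 1.1])   # difficulties

    def p_2pl(theta, a, b):
        # Probability of a correct response under the 2PL model.
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def theta_mle(responses, grid=np.linspace(-4, 4, 801)):
        # Grid-search maximum-likelihood theta estimate under the 2PL model.
        loglik = np.zeros_like(grid)
        for u, ai, bi in zip(responses, a, b):
            p = p_2pl(grid, ai, bi)
            loglik += u * np.log(p) + (1 - u) * np.log(1 - p)
        return float(grid[np.argmax(loglik)])

    # Two response patterns with the same number-correct score (2 of 4)...
    pattern_1 = np.array([1, 1, 0, 0])   # correct on the easier, low-discrimination items
    pattern_2 = np.array([0, 0, 1, 1])   # correct on the harder, high-discrimination items
    # ...get the same number-correct score but different IRT theta estimates.
    print(pattern_1.sum(), theta_mle(pattern_1))
    print(pattern_2.sum(), theta_mle(pattern_2))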
Peer reviewed
Direct link
Gruss, Richard; Clemons, Josh – Journal of Computer Assisted Learning, 2023
Background: The sudden growth in online instruction due to COVID-19 restrictions has given renewed urgency to questions about remote learning that have remained unresolved. Web-based assessment software provides instructors with an array of options for varying testing parameters, but the pedagogical impacts of some of these variations have yet to be…
Descriptors: Test Items, Test Format, Computer Assisted Testing, Mathematics Tests
Peer reviewed
Direct link
Hande, Vasudha; Jayan, Parvathy; Kishore, M. Thomas; Bhaskarapillai, Binukumar; Kommu, John Vijay Sagar – Journal of Intellectual Disabilities, 2023
Identifying the determinants of positive coping is a critical step in empowering the parents of children with intellectual disability. In this context, this study aims to develop a scale to assess the determinants of positive coping. Accordingly, culturally relevant items were pooled, validated by experts, and refined. The scale was…
Descriptors: Parents, Coping, Intellectual Disability, Children
Peer reviewed
Direct link
Gustafsson, Martin; Barakat, Bilal Fouad – Comparative Education Review, 2023
International assessments inform education policy debates, yet little is known about their floor effects: To what extent do they fail to differentiate between the lowest performers, and what are the implications of this? TIMSS, SACMEQ, and LLECE data are analyzed to answer this question. In TIMSS, floor effects have been reduced through the…
Descriptors: Achievement Tests, Elementary Secondary Education, International Assessment, Foreign Countries
Peer reviewed
Direct link
Kwok, Henry – Journal of Education Policy, 2023
This article contributes to the critical policy studies of educational governance and its crisis, through canvassing Basil Bernstein's concept of the 'totally pedagogised society' (TPS). The TPS witnesses not only the growth of transnational private actors, but also the disjuncture between global and national agendas of reform, on the governance…
Descriptors: Educational Change, Governance, Court Litigation, Test Items
Peer reviewed
PDF on ERIC (download full text)
Süleyman Demir; Derya Çobanoglu Aktan; Nese Güler – International Journal of Assessment Tools in Education, 2023
This study has two main purposes: first, to compare the different item selection methods and stopping rules used in Computerized Adaptive Testing (CAT) applications with simulated data generated based on the item parameters of the Vocational Maturity Scale; and second, to test the validity of CAT application scores. For the first purpose,…
Descriptors: Computer Assisted Testing, Adaptive Testing, Vocational Maturity, Measures (Individuals)
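The abstract above refers to item selection methods and stopping rules in CAT. The sketch below is a minimal, generic CAT loop, not the study's design: it assumes a hypothetical 2PL item bank (not the Vocational Maturity Scale parameters), selects the next item by maximum Fisher information at the current ability estimate, and stops when the posterior standard error falls below a threshold or a maximum test length is reached.

    import numpy as np

    rng = np.random.default_rng(1)
    a = rng.uniform(0.8, 2.0, 200)      # hypothetical 2PL discriminations
    b = rng.normal(0.0, 1.0, 200)       # hypothetical 2PL difficulties

    def p_2pl(theta, a, b):
        # Probability of a correct response under the 2PL model.
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def info_2pl(theta, a, b):
        # Fisher information of a 2PL item at ability theta.
        p = p_2pl(theta, a, b)
        return a**2 * p * (1 - p)

    def eap(responses, a_used, b_used, grid=np.linspace(-4, 4, 81)):
        # EAP ability estimate and posterior SD under a standard-normal prior.
        post = np.exp(-0.5 * grid**2)
        for u, ai, bi in zip(responses, a_used, b_used):
            p = p_2pl(grid, ai, bi)
            post *= p**u * (1 - p)**(1 - u)
        post /= post.sum()
        mean = np.sum(grid * post)
        sd = np.sqrt(np.sum((grid - mean)**2 * post))
        return mean, sd

    true_theta = 0.7                    # simulated examinee ability
    administered, responses = [], []
    theta_hat, se = 0.0, np.inf
    while se > 0.35 and len(administered) < 30:   # SE-threshold and max-length stopping rules
        pool = [i for i in range(len(a)) if i not in administered]
        nxt = max(pool, key=lambda i: info_2pl(theta_hat, a[i], b[i]))  # maximum-information selection
        administered.append(nxt)
        responses.append(int(rng.random() < p_2pl(true_theta, a[nxt], b[nxt])))
        theta_hat, se = eap(responses, a[administered], b[administered])
    print(len(administered), round(theta_hat, 2), round(se, 2))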
Peer reviewed
Direct link
Li, Yan; Huang, Chao; Liu, Jia – Journal of Educational and Behavioral Statistics, 2023
Cognitive diagnostic computerized adaptive testing (CD-CAT) is a cutting-edge technology in educational measurement that aims to provide feedback on examinees' strengths and weaknesses while increasing test accuracy and efficiency. To date, most CD-CAT studies have made methodological progress under simulated conditions, but little has…
Descriptors: Computer Assisted Testing, Cognitive Tests, Diagnostic Tests, Reading Tests
Peer reviewed
Direct link
Langbeheim, Elon; Akaygun, Sevil; Adadan, Emine; Hlatshwayo, Manzini; Ramnarain, Umesh – International Journal of Science and Mathematics Education, 2023
Linking assessment and curriculum in science education, particularly within the topic of matter and its changes, is often taken for granted. Some of the fundamental elements of the assessment, such as the choice of wording and visual representations, as well as its relation to the curricular sequence, remain understudied. In addition, very few…
Descriptors: Student Evaluation, Evaluation Methods, Science Education, Test Items
Peer reviewed
PDF on ERIC (download full text)
Fatih Orcan – International Journal of Assessment Tools in Education, 2023
Among the available methods, Cronbach's alpha and McDonald's omega are commonly used for reliability estimation. Alpha uses inter-item correlations, while omega is based on a factor analysis result. This study uses simulated ordinal data sets to test whether alpha and omega produce different estimates. Their performances were compared according to the…
Descriptors: Statistical Analysis, Monte Carlo Methods, Correlation, Factor Analysis
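The abstract above notes that alpha is computed from inter-item covariances while omega comes from a factor-analysis solution. The sketch below illustrates both formulas on continuous toy data with assumed one-factor loadings; the study itself simulates ordinal data, and its conditions are not reproduced here.

    import numpy as np

    # Toy item-score matrix (rows = respondents, columns = items); purely illustrative.
    rng = np.random.default_rng(0)
    f = rng.normal(size=(500, 1))                      # common factor scores
    loadings = np.array([0.7, 0.6, 0.8, 0.5])          # hypothetical standardized loadings
    x = f @ loadings[None, :] + rng.normal(scale=np.sqrt(1 - loadings**2), size=(500, 4))

    def cronbach_alpha(x):
        # Alpha from the item covariance matrix: k/(k-1) * (1 - sum of item variances / total variance).
        k = x.shape[1]
        cov = np.cov(x, rowvar=False)
        return k / (k - 1) * (1 - np.trace(cov) / cov.sum())

    def mcdonald_omega(loadings, uniquenesses):
        # Omega from a one-factor solution: (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses).
        s = loadings.sum()
        return s**2 / (s**2 + uniquenesses.sum())

    print(cronbach_alpha(x))
    print(mcdonald_omega(loadings, 1 - loadings**2))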
Peer reviewed
PDF on ERIC (download full text)
Qiwei He – International Journal of Assessment Tools in Education, 2023
Collaborative problem solving (CPS) is inherently an interactive, conjoint, dual-strand process that considers how a student reasons about a problem as well as how s/he interacts with others to regulate social processes and exchange information (OECD, 2013). Measuring CPS skills presents a challenge for obtaining consistent, accurate, and reliable…
Descriptors: Cooperative Learning, Problem Solving, Test Items, International Assessment
Peer reviewed
PDF on ERIC (download full text)
Andrew M. Olney – Grantee Submission, 2023
Multiple choice questions are traditionally expensive to produce. Recent advances in large language models (LLMs) have led to fine-tuned LLMs that generate questions competitive with human-authored ones. However, the relative capabilities of ChatGPT-family models have not yet been established for this task. We present a carefully controlled…
Descriptors: Test Construction, Multiple Choice Tests, Test Items, Algorithms
Peer reviewed
PDF on ERIC (download full text)
Hanif Akhtar – International Society for Technology, Education, and Science, 2023
For efficiency, the Computerized Adaptive Test (CAT) algorithm selects items with the maximum information, typically with a 50% probability of being answered correctly. However, examinees may not be satisfied if they only answer 50% of the items correctly. Researchers discovered that changing the item selection algorithms to choose easier items (i.e.,…
Descriptors: Success, Probability, Computer Assisted Testing, Adaptive Testing
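The entry above describes selecting items at roughly a 50% success probability and the alternative of choosing easier items. As a small illustration under a Rasch model (an assumption; the study's model and algorithms are not specified here), the snippet below shows how a target success probability maps to an item-difficulty offset from the current ability estimate.

    import math

    def difficulty_offset(target_p):
        # Under the Rasch model, P(correct) = 1 / (1 + exp(-(theta - b))).
        # Choosing b = theta + ln((1 - p) / p) gives a success probability of p.
        return math.log((1 - target_p) / target_p)

    theta = 0.0  # current ability estimate
    for p in (0.5, 0.6, 0.7, 0.8):
        b = theta + difficulty_offset(p)
        print(f"target success {p:.0%}: select items near difficulty b = {b:+.2f}")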