Showing 1 to 15 of 704 results
Peer reviewed
PDF on ERIC (download full text)
Gamon Savatsomboon; Prasert Ruannakarn; Phamornpun Yurayat; Ong-art Chanprasitchai; Jibon Kumar Sharma Leihaothabam – European Journal of Psychology and Educational Research, 2024
Using R to conduct univariate meta-analyses is becoming common in published research, but R can also conduct multivariate meta-analysis (MMA). Newcomers to both R and MMA may find using R for MMA daunting: R is not easy for those unfamiliar with coding, and MMA is a topic in advanced statistics. Thus, it may be…
Descriptors: Educational Psychology, Multivariate Analysis, Evaluation Methods, Data Processing
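As background to the meta-analytic methods this entry discusses, the core of a univariate meta-analysis is inverse-variance pooling of study effect sizes. A minimal Python sketch (an illustration only; the article itself works in R, and the study effects below are hypothetical):

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance (fixed-effect) pooling of study effect sizes."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return pooled, se

# Three hypothetical studies: effect estimates and their sampling variances.
effects = [0.30, 0.45, 0.20]
variances = [0.02, 0.05, 0.01]
pooled, se = fixed_effect_meta(effects, variances)
```

Multivariate meta-analysis generalizes this by pooling vectors of correlated effects with a weight matrix rather than scalar weights.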
Peer reviewed
Direct link
Austin M. Shin; Ayaan M. Kazerouni – ACM Transactions on Computing Education, 2024
Background and Context: Students' programming projects are often assessed on the basis of their tests as well as their implementations, most commonly using test adequacy criteria like branch coverage, or, in some cases, mutation analysis. As a result, students are implicitly encouraged to use these tools during their development process (i.e., so…
Descriptors: Feedback (Response), Programming, Student Projects, Computer Software
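Mutation analysis, mentioned above as a test adequacy measure, scores a test suite by whether it detects ("kills") small seeded faults. A minimal Python sketch with a hypothetical function and a hand-made mutant:

```python
def add_abs(a, b):
    return abs(a) + abs(b)

def mutant_add_abs(a, b):
    # Mutant: '+' replaced by '-' (a typical mutation operator)
    return abs(a) - abs(b)

def run_suite(fn, cases):
    """Return True if fn passes every (args, expected) case."""
    return all(fn(*args) == expected for args, expected in cases)

# A weak suite cannot distinguish the mutant (b == 0 hides the change)...
weak = [((3, 0), 3)]
# ...while a stronger suite 'kills' it.
strong = [((3, 0), 3), ((2, 2), 4)]

weak_kills = not run_suite(mutant_add_abs, weak)      # mutant survives
strong_kills = not run_suite(mutant_add_abs, strong)  # mutant killed
```

A suite's mutation score is the fraction of seeded mutants it kills, a stricter adequacy signal than branch coverage alone.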
Peer reviewed
Direct link
Paganin, Sally; Paciorek, Christopher J.; Wehrhahn, Claudia; Rodríguez, Abel; Rabe-Hesketh, Sophia; de Valpine, Perry – Journal of Educational and Behavioral Statistics, 2023
Item response theory (IRT) models typically rely on a normality assumption for subject-specific latent traits, which is often unrealistic in practice. Semiparametric extensions based on Dirichlet process mixtures (DPMs) offer a more flexible representation of the unknown distribution of the latent trait. However, the use of such models in the IRT…
Descriptors: Bayesian Statistics, Item Response Theory, Guidance, Evaluation Methods
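For readers unfamiliar with IRT, the parametric core of such models is the item response function; in the two-parameter logistic (2PL) model it maps a latent trait to a probability of a correct response. A minimal Python sketch (illustrative only; the article's DPM extension relaxes the normality assumption on the trait distribution, not this function):

```python
import math

def p_correct(theta, a, b):
    """2PL IRT: probability of a correct response given ability theta,
    item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the probability is exactly 0.5, regardless of a.
p_mid = p_correct(theta=0.0, a=1.2, b=0.0)
# Well above the difficulty, the probability approaches 1.
p_hi = p_correct(theta=2.0, a=1.2, b=0.0)
```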
Peer reviewed
Direct link
Ernesto Panadero; Alazne Fernández Ortube; Rebecca Krebs; Julian Roelle – Assessment & Evaluation in Higher Education, 2025
Rubrics play a crucial role in shaping educational assessment, providing clear criteria for both teaching and learning. The advent of online rubric platforms has the potential to significantly enhance the effectiveness of rubrics in educational contexts, offering innovative features for assessment and feedback through the creation of e-rubrics.…
Descriptors: Scoring Rubrics, Teaching Methods, Learning Processes, Feedback (Response)
Peer reviewed
Direct link
Kylie E. Hunter; Mason Aberoumand; Sol Libesman; James X. Sotiropoulos; Jonathan G. Williams; Jannik Aagerup; Rui Wang; Ben W. Mol; Wentao Li; Angie Barba; Nipun Shrestha; Angela C. Webster; Anna Lene Seidler – Research Synthesis Methods, 2024
Increasing concerns about the trustworthiness of research have prompted calls to scrutinise studies' Individual Participant Data (IPD), but guidance on how to do this was lacking. To address this, we developed the IPD Integrity Tool to screen randomised controlled trials (RCTs) for integrity issues. Development of the tool involved a literature…
Descriptors: Integrity, Randomized Controlled Trials, Participant Characteristics, Computer Software
Peer reviewed
Direct link
Jiangang Hao; Alina A. von Davier; Victoria Yaneva; Susan Lottridge; Matthias von Davier; Deborah J. Harris – Educational Measurement: Issues and Practice, 2024
The remarkable strides in artificial intelligence (AI), exemplified by ChatGPT, have unveiled a wealth of opportunities and challenges in assessment. Applying cutting-edge large language models (LLMs) and generative AI to assessment holds great promise in boosting efficiency, mitigating bias, and facilitating customized evaluations. Conversely,…
Descriptors: Evaluation Methods, Artificial Intelligence, Educational Change, Computer Software
Peer reviewed
Direct link
Kasani, Hamed Abbasi; Mourkani, Gholamreza Shams; Seraji, Farhad; RezaeiZadeh, Morteza; Aghazadeh, Solmaz; Abedi, Hojjat – Educational Technology Research and Development, 2023
The purpose of this study was to develop a software prototype for formative assessment in the LMS and to measure its usability. The study was applied in terms of its research objective and used a mixed (qualitative-quantitative) method for data collection, following an exploratory sequential mixed methods design. In addition, in order to…
Descriptors: Computer Software, Formative Evaluation, Usability, Design
Peer reviewed
Direct link
Leen Adel Gammoh – Education and Information Technologies, 2025
This qualitative study examines the risks educators in Jordan face with the integration of ChatGPT, an emerging AI technology, into academic settings. While considerable attention has been given to risks affecting university students, there remains a gap in understanding the specific challenges encountered by educators themselves. Through…
Descriptors: Foreign Countries, Artificial Intelligence, Educational Technology, Technology Integration
Peer reviewed
PDF on ERIC (download full text)
Emmanuel Senior Tenakwah; Gideon Boadu; Emmanuel Junior Tenakwah; Michael Parzakonis; Mark Brady; Penny Kansiime; Shannon Said; Sarah Eyaa; Raymond Kwojori Ayilu; Ciprian Radavoi; Alan Berman – Knowledge Management & E-Learning, 2025
The development and introduction of AI language models have transformed the way humans and institutions interact with technology, enabling natural and intuitive communication between humans and machines. This paper conducts a competence-based analysis of ChatGPT's task responses to provide insights into its language proficiency, critical analysis…
Descriptors: Higher Education, Evaluation Methods, Artificial Intelligence, Computer Software
Peer reviewed
PDF on ERIC (download full text)
Xue Zhou; Lilian Schofield – Journal of Learning Development in Higher Education, 2024
This paper proposes a conceptual framework for integrating Artificial Intelligence (AI) into the curriculum. It builds on previous conceptual papers, which provided initial suggestions on integrating AI into teaching. The approach to developing the conceptual framework includes drawing on existing frameworks, AI literature, and case studies from…
Descriptors: Artificial Intelligence, Technological Literacy, Technology Integration, Curriculum Development
Peer reviewed
Direct link
Farkhanda Qamar; Naveed Ikram – Education and Information Technologies, 2024
Curriculum and its effective application have always been of key importance in the educational system, and their significance increases in higher education. Recent studies acknowledge the importance of an efficient and effective curriculum, but the mechanisms used for curriculum preparation are still human-intensive, tedious, and…
Descriptors: Undergraduate Study, Evaluation Methods, Engineering Education, Computer Software
Peer reviewed
Direct link
Preya Bhattacharya – International Journal of Social Research Methodology, 2023
In the last few years, Qualitative Comparative Analysis (QCA) has become one of the most important data analysis methods in comparative research. According to the guidelines of this method, there are certain steps that a researcher needs to follow, before causally analyzing the data for necessary and sufficient conditions. One of these steps is…
Descriptors: Evaluation Methods, Comparative Analysis, Social Science Research, Computer Software
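One of the preparatory QCA steps the abstract alludes to is building a truth table: grouping cases by their configuration of conditions and checking how consistently each configuration produces the outcome. A minimal Python sketch with hypothetical binary-calibrated cases:

```python
from collections import defaultdict

def truth_table(cases):
    """Group cases by their configuration of binary conditions and
    return each configuration's outcome consistency (share of cases
    in which the outcome is present)."""
    rows = defaultdict(list)
    for conditions, outcome in cases:
        rows[conditions].append(outcome)
    return {cfg: sum(outs) / len(outs) for cfg, outs in rows.items()}

# Hypothetical cases: (condition A, condition B) -> outcome present (1) or absent (0)
cases = [((1, 0), 1), ((1, 0), 1), ((0, 1), 0), ((1, 1), 1)]
table = truth_table(cases)
# table[(1, 0)] == 1.0: this configuration consistently yields the outcome
```

Only configurations whose consistency clears a chosen threshold are carried forward into the analysis of necessary and sufficient conditions.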
Peer reviewed
PDF on ERIC (download full text)
Demir, Suleyman – International Journal of Assessment Tools in Education, 2022
This study aims to compare normality tests across different sample sizes in normally distributed data, under different kurtosis and skewness coefficients obtained through simulation. To this end, simulated data were first generated using the MATLAB program for different skewness/kurtosis coefficients and different sample sizes. The normality analysis…
Descriptors: Sample Size, Comparative Analysis, Computer Software, Evaluation Methods
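The simulation design described above hinges on generating samples with known distributional shape and checking their moment-based skewness and kurtosis. A minimal stdlib-Python sketch (illustrative only; the study itself used MATLAB):

```python
import random
import statistics

def sample_skew_kurt(xs):
    """Moment-based sample skewness and excess kurtosis."""
    n = len(xs)
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    skew = sum((x - m) ** 3 for x in xs) / (n * s ** 3)
    kurt = sum((x - m) ** 4 for x in xs) / (n * s ** 4) - 3  # excess kurtosis
    return skew, kurt

random.seed(42)
normal_sample = [random.gauss(0, 1) for _ in range(5000)]
skew, kurt = sample_skew_kurt(normal_sample)
# For a large normal sample, both statistics should be close to 0.
```

Normality tests such as Shapiro-Wilk or Kolmogorov-Smirnov can then be applied to such samples to compare their power across sample sizes.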
Peer reviewed
PDF on ERIC (download full text)
Tan, Teck Kiang – Practical Assessment, Research & Evaluation, 2022
Power analysis based on the analytical t-test is an important part of a research study, determining the sample size required to detect an effect when comparing two means. This paper presents a reader-friendly procedure for carrying out t-test power analysis using various R add-on packages. While there is a growing number of R…
Descriptors: Programming Languages, Sample Size, Bayesian Statistics, Intervention
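The power calculation described above can also be approximated without R: under the normal approximation to the noncentral t distribution, power is a closed-form function of effect size, per-group sample size, and alpha. A minimal stdlib-Python sketch (an approximation, not the exact noncentral-t computation the R packages perform):

```python
from statistics import NormalDist

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample t-test for
    standardized effect size d, via the normal approximation."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    ncp = d * (n_per_group / 2) ** 0.5   # noncentrality parameter
    return 1 - NormalDist(mu=ncp).cdf(z) + NormalDist(mu=ncp).cdf(-z)

def n_for_power(d, target=0.80, alpha=0.05):
    """Smallest per-group n whose approximate power reaches the target."""
    n = 2
    while two_sample_power(d, n, alpha) < target:
        n += 1
    return n
```

For a medium effect (d = 0.5) at 80% power and alpha = 0.05, this yields roughly 63-64 participants per group, in line with the conventional figure.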
Peer reviewed
PDF on ERIC (download full text)
Kane Meissel; Esther S. Yao – Practical Assessment, Research & Evaluation, 2024
Effect sizes are important because they are an accessible way to indicate the practical importance of observed associations or differences. Standardized mean difference (SMD) effect sizes, such as Cohen's d, are widely used in education and the social sciences, in part because they are relatively easy to calculate. However, SMD effect sizes…
Descriptors: Computer Software, Programming Languages, Effect Size, Correlation
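For reference, the most common SMD mentioned above, Cohen's d, divides the difference in group means by the pooled standard deviation. A minimal Python sketch with hypothetical groups:

```python
import statistics

def cohens_d(group1, group2):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.fmean(group1) - statistics.fmean(group2)) / pooled_sd

# Hypothetical outcome scores for two groups.
treatment = [5.1, 4.8, 5.6, 5.0, 5.3]
control = [4.2, 4.5, 4.0, 4.4, 4.1]
d = cohens_d(treatment, control)
```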