Simple Science

Cutting edge science explained simply

# Computer Science / Human-Computer Interaction

The Impact of AI Assistants on Programming Education

Examining how CodeTutor affects student learning in programming courses.

The use of AI assistants, particularly Large Language Models (LLMs), in computer science education has gained attention. These models can help students learn but also raise concerns about potential misuse. Many studies have focused on how LLMs work or on short-term experiments, but little research has examined how these tools affect students over an extended period in actual programming classes. This study aims to fill that gap by examining how an AI assistant called CodeTutor affects learning in an introductory programming course.

Study Overview

We conducted a semester-long study involving 50 students to see how using CodeTutor affected their learning outcomes. Students were divided into two groups: one group used CodeTutor while the other was taught using traditional methods, including human teaching assistants. We looked at their final scores and feedback to assess how effective the AI tool was.

Key Findings

  1. Improvement in Scores: Students who used CodeTutor showed a noticeable improvement in their final scores compared to those who did not.
  2. Experiences of New Users: Students with no prior experience using AI tools benefited the most from CodeTutor.
  3. Feedback on CodeTutor: Most students found that CodeTutor understood their questions and helped them with programming syntax. However, they expressed concerns about its role in developing critical thinking skills.
  4. Shifts in Preferences: Over the semester, students began to prefer human teaching assistants over CodeTutor for help.
  5. Usage for Different Tasks: Students used CodeTutor for various tasks, including completing assignments and debugging code. The clarity of their questions significantly impacted how effective the AI responses were.

Importance of the Study

This research is essential because it provides insight into how AI tools can be integrated into computer science education. By understanding their impact, educators can develop better strategies to enhance student learning while minimizing potential pitfalls.

The Role of AI in Education

AI has been making waves in various fields, including education. The emergence of tools like GitHub Copilot and ChatGPT showcases their capability to address complex problems in a way that resembles human interaction. However, while these tools offer vast opportunities for enhancing learning, concerns about misuse, particularly in academic settings, cannot be ignored.

Challenges in Computer Science Education

Entry-level programming courses often face challenges around how students complete their assignments. With the capabilities of LLMs, students may be tempted to bypass the learning process in search of quick fixes. This has raised alarms about the integrity of the educational process and prompted researchers to explore how these tools could be used responsibly.

Study Methodology

To address these concerns, we set out to determine the effects of CodeTutor on student learning in an introductory programming course.

Participants

Fifty undergraduate students enrolled in an introductory computer science course were selected for this study. Participants were divided into a control group, which relied on traditional learning methods, and an experimental group that utilized CodeTutor.

Study Structure

The study was conducted from September to December 2023. Participants were asked to complete a pre-test to establish baseline knowledge and then engage with either CodeTutor or traditional methods throughout the semester.

Data Collection

Final scores were the primary metric for measuring learning outcomes. Comparisons were made between the two groups to see how well each performed under its respective teaching method.

Results

Performance Comparison

Students using CodeTutor registered an average increase in scores, while those in the control group showed a slight decrease. Statistical tests confirmed that the improvement in the experimental group was significant.
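
As a rough illustration of the between-subjects comparison described above, the sketch below runs Welch's t-test on two groups of final scores using SciPy. The score arrays are made-up placeholders, and the choice of Welch's t-test is an assumption; the study only reports that statistical tests confirmed the improvement was significant.

```python
# A minimal sketch of a between-subjects score comparison.
# The scores below are made-up placeholders, not the study's data,
# and Welch's t-test is an assumed choice: the summary does not
# name the specific statistical test used.
from scipy import stats

codetutor_scores = [88, 92, 79, 85, 91, 84, 90, 87]  # hypothetical experimental group
control_scores = [80, 83, 76, 78, 85, 74, 81, 79]    # hypothetical control group

# Welch's t-test does not assume equal variances between the groups.
t_stat, p_value = stats.ttest_ind(codetutor_scores, control_scores, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is significant at the 0.05 level")
```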

User Experience

Feedback from participants indicated that while many appreciated CodeTutor's ability to assist with syntax and queries, there were lingering doubts about its capacity to foster critical thinking skills.

Shifting Preferences

As the semester progressed, students expressed a growing preference for traditional teaching assistants, highlighting a shift in their engagement with CodeTutor.

Understanding User Engagement

Throughout the study, we examined how students interacted with CodeTutor, including the types of questions they asked and the responses they received.

General Patterns

A total of 82 conversation sessions took place, covering various programming topics and issues. It was noted that students’ clarity in posing questions played a significant role in how effective the AI's answers were.

Types of Questions

Students often sought help with specific programming tasks, syntax issues, and debugging. A classification of the messages revealed that the quality of user-written prompts significantly affected the quality of CodeTutor's responses.

Prompt Quality Analysis

The quality of the prompts submitted by students was evaluated. About 37% of prompts were classified as high quality, which typically led to more effective responses. Poor-quality prompts often lacked the detail CodeTutor needed to assist adequately.
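
The paper does not publish the rubric used to rate prompts, so the sketch below is only a plausible stand-in: a simple rule-based scorer that rewards context, included code, and an explicit question. Every criterion here is an illustrative assumption, not the study's actual classification scheme.

```python
# Hypothetical heuristic for flagging low-detail prompts.
# These criteria are illustrative assumptions, not the rubric
# the study actually used to rate prompt quality.
def score_prompt(prompt: str) -> str:
    points = 0
    if len(prompt.split()) >= 15:          # enough context to act on
        points += 1
    if "def " in prompt or "(" in prompt:  # includes code to discuss
        points += 1
    if "?" in prompt:                      # poses an explicit question
        points += 1
    if any(w in prompt.lower() for w in ("error", "expected", "instead")):
        points += 1                        # describes observed vs. expected behavior
    return "high" if points >= 3 else "low"

print(score_prompt("fix my code"))  # low: nothing for the assistant to work with
print(score_prompt(
    "My binary search returns -1 for values that are present. "
    "Here is the function: def search(a, x): ... "
    "I expected index 3 but got -1 instead. What is wrong?"
))  # high: context, code, expected vs. actual behavior, and a question
```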

Implications for Education

The findings of this study carry important implications for how educators might incorporate AI tools into their courses.

Promoting AI Literacy

Teaching students how to interact effectively with AI tools like CodeTutor is crucial. The ability to formulate clear questions improves the overall experience and learning outcomes.

Balancing AI and Human Interaction

While CodeTutor offers many benefits, it is evident from the research that human teachers play an irreplaceable role in education. The combination of AI support with personal teaching can lead to the best results in student learning outcomes.

Future Directions

Further research is needed to explore how AI can be optimized for educational contexts. The challenges surrounding the detection of AI use and ensuring that students engage meaningfully in their learning must also be addressed.

Conclusion

Our study provides valuable insights into the effectiveness of AI assistants in computer science education. While CodeTutor showed promise in enhancing student learning, concerns regarding critical thinking and a preference for human assistance emerged over time. Moving forward, educators must find ways to integrate AI tools responsibly while ensuring that students develop the necessary skills to thrive in their academic pursuits.

Original Source

Title: Evaluating the Effectiveness of LLMs in Introductory Computer Science Education: A Semester-Long Field Study

Abstract: The integration of AI assistants, especially through the development of Large Language Models (LLMs), into computer science education has sparked significant debate. An emerging body of work has looked into using LLMs in education, but few have examined the impacts of LLMs on students in entry-level programming courses, particularly in real-world contexts and over extended periods. To address this research gap, we conducted a semester-long, between-subjects study with 50 students using CodeTutor, an LLM-powered assistant developed by our research team. Our study results show that students who used CodeTutor (the experimental group) achieved statistically significant improvements in their final scores compared to peers who did not use the tool (the control group). Within the experimental group, those without prior experience with LLM-powered tools demonstrated significantly greater performance gain than their counterparts. We also found that students expressed positive feedback regarding CodeTutor's capability, though they also had concerns about CodeTutor's limited role in developing critical thinking skills. Over the semester, students' agreement with CodeTutor's suggestions decreased, with a growing preference for support from traditional human teaching assistants. Our analysis further reveals that the quality of user prompts was significantly correlated with CodeTutor's response effectiveness. Building upon our results, we discuss the implications of our findings for integrating Generative AI literacy into curricula to foster critical thinking skills and turn to examining the temporal dynamics of user engagement with LLM-powered tools. We further discuss the discrepancy between the anticipated functions of tools and students' actual capabilities, which sheds light on the need for tailored strategies to improve educational outcomes.

Authors: Wenhan Lyu, Yimeng Wang, Tingting Chung, Yifan Sun, Yixuan Zhang

Last Update: 2024-05-02

Language: English

Source URL: https://arxiv.org/abs/2404.13414

Source PDF: https://arxiv.org/pdf/2404.13414

Licence: https://creativecommons.org/licenses/by/4.0/

Changes: This summary was created with assistance from AI and may have inaccuracies. For accurate information, please refer to the original source documents linked here.

Thank you to arxiv for use of its open access interoperability.
