Research Notes - LucasWolke/code-tutor GitHub Wiki
Key Takeaways
Focus on scaffolded guidance (step-by-step help that builds understanding) instead of long, detailed responses that solve the entire problem.
Use interactive support like Socratic questioning (open-ended questions that encourage thinking about the problem) to engage students.
Integration into the workflow, i.e. an LLM directly in the IDE, is really important for engagement.
Provide explanations and pseudocode, not just full code solutions.
Add guardrails to stop students from cheating or learning bad patterns, e.g. reject poor/very short prompts.
Adaptive feedback based on student skill, behavior, and query quality(?).
Find a balance between providing enough help (so students actually use it) and not doing the heavy lifting for them.
Non-goal?: Although many papers focus on teaching students to use LLMs (how to prompt better, how to work with LLMs), the goal of code-tutor is to help students learn programming, not to learn how to use LLMs.
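The guardrail and Socratic-questioning takeaways above could be combined in a pre-processing step: reject low-effort prompts before they reach the model, and wrap accepted ones in a tutoring system prompt. A minimal sketch; all names, thresholds, and banned phrases are illustrative assumptions, not code-tutor's actual implementation:

```python
# Sketch: prompt guardrails + Socratic system prompt (assumed design, not code-tutor's actual code).

MIN_WORDS = 5  # assumed threshold for flagging a low-effort prompt

SOCRATIC_SYSTEM_PROMPT = (
    "You are a programming tutor. Never give full code solutions. "
    "Respond with guiding questions, explanations, and pseudocode only."
)

def passes_guardrails(prompt: str) -> bool:
    """Reject prompts that are very short or just demand a solution."""
    if len(prompt.strip().split()) < MIN_WORDS:
        return False
    banned = ("solve this", "write the code for me", "full solution")
    return not any(phrase in prompt.lower() for phrase in banned)

def build_messages(student_prompt: str) -> list[dict]:
    """Assemble the chat messages sent to the LLM for an accepted prompt."""
    if not passes_guardrails(student_prompt):
        raise ValueError(
            "Prompt rejected: describe your problem and what you have tried so far."
        )
    return [
        {"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
        {"role": "user", "content": student_prompt},
    ]
```

Rejected prompts would get a request for more detail instead of an answer, which also nudges students toward writing better queries.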
Papers
Beyond Traditional Teaching: Large Language Models as Simulated Teaching Assistants in Computer Science
Proposes an interactive LLM that asks clarifying questions until the model has enough information to generate a higher-quality response.
Improves students' prompting skills, but suggests that actual code understanding is not necessarily improved -> better at prompting != better at understanding code.
Compiler-Integrated, Conversational AI for Debugging CS1 Programs
GPT-3.5-turbo, CS1, debugging of compile-time and run-time errors.
Proposes a conversational AI for debugging CS1 programs, integrated into the compiler.
Uses the Socratic method to guide learning through targeted questions that lead students to solutions.
Promising; integrating within existing workflows is crucial for adoption, and engagement in the LLM conversation -> students invest more time and effort.
Plans future work with open-source LLMs; costs were only $0.10 per student (~1000 students) over the 8-week period.
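The paper's compiler-integration idea could look something like this: intercept the diagnostic when student code fails and return a Socratic question instead of a fix. A sketch only; the error categories and question templates are my own illustrative assumptions, not the paper's implementation:

```python
# Sketch: turn a caught error into a guiding question rather than a correction.
# Categories and wording are assumed for illustration.

QUESTION_TEMPLATES = {
    "NameError": "Where did you intend to define '{detail}'? Is it spelled the same everywhere?",
    "TypeError": "What types are the values involved here, and what does the operation expect?",
    "IndexError": "What is the largest valid index for this sequence, and what value does your index reach?",
}

def socratic_hint(error: Exception) -> str:
    """Return a guiding question for a caught error, falling back to a generic prompt."""
    kind = type(error).__name__
    template = QUESTION_TEMPLATES.get(
        kind,
        "Read the error message carefully: which line does it point to, "
        "and what was the program doing there?",
    )
    # Python NameError messages look like: name 'foo' is not defined
    detail = str(error).split("'")[1] if "'" in str(error) else str(error)
    return f"[{kind}] " + template.format(detail=detail)

# Usage: run the student's code and respond with a question, not a fix.
try:
    print(undefined_variable)  # stand-in for a student's buggy line
except Exception as e:
    hint = socratic_hint(e)
```

In a real compiler integration the hint and error context would be passed to the LLM as conversation context; a local template fallback like this would also suit the planned open-source-LLM follow-up.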
Enhancing CS1 Education through Experiential Learning with Robotics Projects
Proposes robotics projects instead of traditional development projects for CS1.
Students had better exam scores than the control group.
Reasons: LLMs couldn't generate good robotics code -> more self-reliance; rerunning code takes longer -> less trial and error; more engagement -> less frustration with code syntax.
Personalized Parsons Puzzles as Scaffolding Enhance Practice Engagement Over Just Showing LLM-Powered Solutions