Benchmark Suite for Explaining Failures in Autograding - AI Explainability and Accountability
- Mentors
- Leilani Gilpin
- Organization
- UC OSPO
- Technologies
    - Python, JavaScript, HTML, Git, Bash
- Topics
    - AI, XAI, Explainable AI, ML
The project aims to address a common challenge faced by both beginner and advanced programmers: understanding and learning from mistakes in code submissions. Traditional autograding systems often provide generic feedback, which may not be sufficient for students to grasp the root causes of their errors and improve their coding skills effectively.
To overcome this limitation, the project proposes a system that analyzes code submissions in depth. The system will consist of custom drivers, one per programming problem, each capable of detecting different classes of errors and producing tailored explanations for them. These explanations will cover not only low-level syntax errors but also higher-level issues such as algorithmic design flaws or incorrect usage of data structures.
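As an illustration of what such a per-problem driver could look like, here is a minimal sketch for a hypothetical "implement your own sort" exercise: it reports syntax errors, flags a design-level issue (calling the built-in sort instead of writing the algorithm), and explains failing test cases. All names (`Feedback`, `run_driver`, `my_sort`) are placeholders for illustration, not part of an existing codebase.

```python
"""Hypothetical per-problem driver; structure and names are illustrative."""

import ast
from dataclasses import dataclass


@dataclass
class Feedback:
    category: str   # e.g. "syntax", "algorithm", "logic"
    message: str    # tailored explanation shown to the student


def run_driver(source: str, test_cases: list[tuple[list[int], list[int]]]) -> list[Feedback]:
    """Analyze a sorting-exercise submission and explain what went wrong."""
    feedback: list[Feedback] = []

    # Low-level check: does the code even parse?
    try:
        tree = ast.parse(source)
    except SyntaxError as err:
        feedback.append(Feedback("syntax", f"Line {err.lineno}: {err.msg}"))
        return feedback

    # Higher-level check: flag use of the built-in sort when the exercise
    # asks for a hand-written algorithm (an example of a design-level rule).
    for node in ast.walk(tree):
        if isinstance(node, ast.Attribute) and node.attr == "sort":
            feedback.append(Feedback(
                "algorithm",
                "The built-in list.sort() is used; this exercise expects "
                "the sorting logic to be implemented by hand."))

    # Behavioral check: run the submitted function against test cases.
    # (In a real system the code would run in a sandbox, e.g. via Judge0.)
    namespace: dict = {}
    exec(compile(tree, "<submission>", "exec"), namespace)
    solve = namespace.get("my_sort")
    if solve is None:
        feedback.append(Feedback("logic", "Expected a function named my_sort."))
        return feedback

    for args, expected in test_cases:
        result = solve(list(args))
        if result != expected:
            feedback.append(Feedback(
                "logic",
                f"my_sort({args}) returned {result}, expected {expected}."))

    return feedback
```

Each `Feedback` entry pairs an error category with a plain-language explanation, which is what distinguishes this approach from a pass/fail autograder.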
The project also includes an intuitive interface with a structured roadmap covering programming topics from basic to advanced levels. This interface will offer a gamified learning experience, complete with progress indicators and visual feedback to keep users engaged and motivated. By integrating the custom drivers and feedback pipeline into this interface, the project aims to provide a seamless learning experience for users at all skill levels.
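One possible way to model the roadmap and its progress indicators is sketched below; the field names, sample topics, and driver module paths are assumptions made purely for illustration.

```python
"""Hypothetical roadmap/progress model; all names and data are illustrative."""

from dataclasses import dataclass, field


@dataclass
class Problem:
    slug: str                  # e.g. "fizzbuzz"
    driver: str                # module implementing the per-problem driver
    solved: bool = False


@dataclass
class Topic:
    name: str                  # e.g. "Arrays", "Recursion"
    level: str                 # "basic", "intermediate", or "advanced"
    problems: list[Problem] = field(default_factory=list)

    def progress(self) -> float:
        """Fraction of solved problems, used to drive the progress indicators."""
        if not self.problems:
            return 0.0
        return sum(p.solved for p in self.problems) / len(self.problems)


roadmap = [
    Topic("Syntax basics", "basic",
          [Problem("hello-world", "drivers.hello"),
           Problem("fizzbuzz", "drivers.fizzbuzz")]),
    Topic("Sorting", "intermediate",
          [Problem("insertion-sort", "drivers.sorting")]),
]
```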
Utilizing open-source tools such as the Monaco code editor for in-browser editing and Judge0 for sandboxed code execution ensures compatibility and accessibility across platforms. The project also emphasizes continuous feedback and refinement through interactions with mentors, instructors, and users; this iterative approach allows the system to keep improving and remain effective in helping learners understand programming concepts and sharpen their coding skills.
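For the execution backend, a minimal sketch of submitting code to a self-hosted Judge0 CE instance is shown below. The endpoint URL, port, and `language_id` (71 corresponds to Python 3 in Judge0 CE) are deployment-specific assumptions, not fixed project choices.

```python
"""Minimal sketch of a synchronous submission to a self-hosted Judge0 CE instance."""

import requests

JUDGE0_URL = "http://localhost:2358"   # assumed local Judge0 deployment


def execute(source_code: str, stdin: str = "") -> dict:
    """Run code synchronously (wait=true) and return Judge0's result JSON."""
    response = requests.post(
        f"{JUDGE0_URL}/submissions",
        params={"base64_encoded": "false", "wait": "true"},
        json={
            "source_code": source_code,
            "language_id": 71,          # Python 3 on Judge0 CE (assumed)
            "stdin": stdin,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


result = execute("print(int(input()) * 2)", stdin="21")
print(result["status"]["description"], result.get("stdout"))
```

The driver layer would inspect the returned status, stdout, and stderr fields to decide which explanation to surface to the student.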