Dr. Daniel Zingaro and Dr. Leo Porter discuss using large language models (LLMs) in the classroom, including how these tools reduce syntax errors and the need to memorize APIs. They address ethical concerns about relying on commercial tools, explain why they aren't worried about cheating, and emphasize the importance of skills like reading code and writing test cases.
Quick takeaways
LLMs allow for a shift in focus from syntax errors to higher-level problem-solving skills in programming education.
Integration of LLMs in programming education requires a discussion on the impact, ethics, and equitable access to these tools.
Emphasizing code reading alongside code writing in programming education enhances students' understanding of code and problem-solving approaches.
Deep dives
The Shift in Teaching Programming: Using LLMs in Introductory CS Courses
Dr. Leo Porter and Dr. Daniel Zingaro discuss their decision to incorporate LLMs (Large Language Models) into the introductory CS1 courses they teach. They explain that the growing impact of LLMs on programming projects convinced them that how programming is taught needs to change. Instead of focusing extensively on syntax, they highlight the importance of teaching skills such as reading code, writing effective test cases, debugging, and problem decomposition. LLMs automate much of the syntax-level work, freeing students to concentrate on higher-level problem-solving skills. They also note that LLMs may change the interface and workflow of programming, as students can rely more on prompts and regenerate code as needed. However, they emphasize the importance of teaching principles and a systematic approach to using LLMs to ensure long-term success.
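As a hypothetical illustration of that prompt-driven workflow (not an example from the episode), a student might decompose a word-counting task into small functions whose docstrings double as Copilot prompts. The bodies below are ordinary reference implementations so the sketch runs on its own; in class, Copilot would be asked to generate them.

```python
# Hypothetical sketch: each docstring states one small, testable responsibility
# and would serve as the prompt for Copilot to fill in the body.

def clean_word(word):
    """Lowercase a word and strip surrounding punctuation."""
    return word.lower().strip(".,!?;:\"'")

def count_words(text):
    """Return a dict mapping each cleaned word in text to its frequency."""
    counts = {}
    for word in text.split():
        cleaned = clean_word(word)
        if cleaned:
            counts[cleaned] = counts.get(cleaned, 0) + 1
    return counts

print(count_words("The cat saw the cat."))  # {'the': 2, 'cat': 2, 'saw': 1}
```

Splitting the problem into single-purpose functions like this is the kind of problem decomposition the instructors want students to practice, independent of which tool generates the code.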
Tools and Technology: VS Code with CoPilot and CoPilot Chat
The instructors recommend VS Code with Copilot as the primary tool for students to generate code. Integrating Copilot into the IDE simplifies code generation and lets students immediately test the code in the same environment. They also suggest using Copilot Chat or ChatGPT to discuss modules, libraries, and other questions about the code; Copilot performs well at suggesting appropriate libraries and discussing their pros and cons. They also stress that students must learn to write their own test cases, because generated code may contain logical errors that need to be identified and fixed through testing.
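A minimal, hypothetical sketch of why hand-written tests matter (the function and bug below are invented for illustration, not taken from the episode): suppose Copilot produced a median function that looks plausible but mishandles even-length lists. The second assertion is intended to fail, which is exactly the signal that tells the student to fix the code or regenerate a new suggestion.

```python
# Hypothetical generated code with a logical error for even-length lists.

def median(values):
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    return ordered[len(ordered) // 2]  # bug: ignores the even-length case

# Hand-written test cases: the second one exposes the bug.
assert median([3, 1, 2]) == 2            # passes
assert median([1, 2, 3, 4]) == 2.5       # fails: the function returns 3
```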
Ethical and Equity Concerns: Inequity in Access and the Future of Code Ownership
The instructors express concern about inequitable access to LLMs if these tools end up behind paywalls or subscription models. They emphasize the need to ensure that students from all socioeconomic backgrounds have access to the tools for educational purposes. They also raise ethical questions about copyright, ownership of generated code, and the implications for intellectual property. While acknowledging the benefits of LLMs for code generation and productivity, they urge educators to engage students in discussions about the impact and implications of using LLMs, and to promote ethical practices in code development and ownership.
Teaching Code Reading as a Core Skill
In programming courses, code reading is treated as an important skill that students should acquire alongside writing code. Emphasizing code reading early on prepares students to tackle larger projects and understand existing code bases. The goal is to decouple programming from the mechanics of syntax and syntax errors and focus instead on solving meaningful problems. Large language models can assist with code reading by producing a variety of code styles for students to study and engage with, which sharpens their sense of what makes code readable and exposes them to different ways of solving the same problem.
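As a hypothetical reading exercise in that spirit (not one described in the episode), students might be shown two correct solutions to the same problem, similar to the stylistic variety an LLM produces, and asked to compare which is easier to read and why.

```python
# Two correct implementations of the same task, in different styles.

def sum_evens_loop(numbers):
    """Sum the even numbers using an explicit loop."""
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n
    return total

def sum_evens_expr(numbers):
    """Sum the even numbers using a generator expression."""
    return sum(n for n in numbers if n % 2 == 0)

assert sum_evens_loop([1, 2, 3, 4]) == sum_evens_expr([1, 2, 3, 4]) == 6
```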
Shifting Assessment and Incorporating Authentic Tasks
The integration of large language models in education necessitates a shift in assessment methods. Traditional autograded assignments are being reevaluated, as they often fail to measure students' level of knowledge accurately. Instead, the focus is on assigning more authentic and open-ended projects, such as building websites or solving real-world problems. By connecting programming to meaningful contexts, students are motivated to learn and create, reducing the temptation to cheat. This shift in assessment requires faculty to adapt their grading methods, potentially relying on human assessment and innovative techniques like assessing explanations through video recordings. There is optimism that this transition will provide a more equitable and valuable learning experience for students.
Episode notes
Dr. Daniel Zingaro and Dr. Leo Porter, co-authors of the book Learn AI-Assisted Python Programming, speak with host Jeremy Jung about teaching programming with the aid of large language models (LLMs). They discuss writing a book to use in Leo's introductory CS class, explore how GitHub Copilot de-emphasizes syntax errors and reduces the need to memorize APIs, and explain why they want students to write manual test cases. They also discuss possible ethical concerns of relying on commercial tools, the impact of LLMs on coursework, and why they aren't worried about students cheating with LLMs.