Teaching & Learning with ChatGPT: Opportunity or Quagmire? Part II
How Can We Use Generative AI to Support and Enhance Student Learning?
As described in our previous post, the unavoidable entanglement with generative AI tools represents a unique and promising moment for deeply refining our approach to classroom work of all kinds. Though it will (of course!) create additional work for instructors, this is a prime opportunity to consider how the thoughtful integration of AI tools into our subjects can support and enhance student learning and establish a workable foundation for long-term engagement with AI technologies across MIT classrooms and learning spaces.
In this post, we provide some guidance and associated resources for the use of generative AI in assignments and assessments in subjects across the Institute.
How might generative AI prompt us to reconsider and refine goals for student learning?
Before considering the affordances or annoyances of generative AI in your teaching context, it is important to critically examine your real goals for student learning. Are there levels of higher-order thinking – more complex, more authentic learning goals – that you’d like students to achieve? If so, you can begin to explore the ways that generative AI tools can help students achieve those goals. In particular, you may wish to consider:
- Whether (and how) the technology can enable students to engage more meaningfully and authentically with the course material and/or the discipline overall;
- How you might redesign your assignments and assessments to leverage generative AI in ways that better support meaningful student learning; and
- How ChatGPT handles your current assignment prompts and problems. Try them out (one lightweight way to do so is sketched below), and think back to your ideal goals for student learning – for most of us, these are goals that generative AI cannot achieve on its own. Consider how you can modify your assignments to support your actual goals for student learning.
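For instructors comfortable with a little scripting, a short program can make it easy to run several of your existing prompts through a model at once and skim the results; pasting prompts into the ChatGPT web interface one at a time works just as well. The sketch below is illustrative only, not an official tool: it assumes the openai Python package, an OPENAI_API_KEY environment variable, and a placeholder model name that you would swap for whatever model you have access to.

```python
# Illustrative sketch: batch-test existing assignment prompts against a chat
# model and save the responses for later review against your learning goals.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Replace these with prompts or problems from your own subject.
assignment_prompts = [
    "Explain the difference between precision and accuracy, with an example.",
    "Derive the equation of motion for a simple pendulum; state your assumptions.",
]

for i, prompt in enumerate(assignment_prompts, start=1):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; use whichever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    # Save each response so it can be compared against your rubric later.
    with open(f"ai_response_{i}.txt", "w") as f:
        f.write(f"PROMPT:\n{prompt}\n\nRESPONSE:\n{answer}\n")
    print(f"Saved AI response for prompt {i}")
```

Reading the saved responses alongside your rubric is often enough to reveal which prompts a chatbot handles easily and which still demand the thinking you actually want to assess.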
The Process of Student Learning
For many instructors, thinking about the process of student learning, and the assessment of that process, may be a useful way to (1) help students develop the habits of mind and skills essential to the discipline (or subject) and (2) shift the focus of assessment away from end products that may lend themselves to chatbot plagiarism.
Higher education author and consultant John Warner recently commented: “One of the hallmarks of growing sophistication as a writer is seeing the idea you thought you were expressing change in front of your eyes as you are writing. This is high-level critical thinking. This kind of emergent rethinking is an experience that every college-level writer should be familiar with.” (Warner, 2022).
And, as Nancy Gleason, director of the Hilary Ballon Center for Teaching and Learning at NYU Abu Dhabi, wrote recently in Times Higher Education, “[…the assessment of only] a completed product is no longer viable. Scaffolding [and assessing] the skills and competencies associated with writing, producing and creating is the way forward.” (Gleason, 2022).
This is particularly relevant here at MIT, where developing students as critical thinkers and problem solvers is a primary and essential goal of an MIT education and a cornerstone of the campus ethos. Experts in a field are comfortable “playing” with multiple solution paths and ideas – hitting dead ends included – and learning from these mistakes to eventually formulate solutions (see the reading suggestions under Resources on Expert v. Novice Learners below). Many novices (our students included) believe that if they don’t see the solution right away, they have failed. Learning how to solve problems involves learning from failed solution attempts and accepting that initial “failure” is almost always part of developing a successful solution. Here, a focus on the process, in addition to the product, can help students achieve our goals for them as MIT graduates and minimize chatbot plagiarism. Consider the prescient comments of cognitive and learning scientist Michelene Chi in her 1994 paper on the role of self-explanation in improving students’ understanding of science:
“…especially for challenging science domains… students should learn to be able to talk science (to understand how the discourse of the field is organized, how viewpoints are presented, and what counts as arguments and support for these arguments), so that students can participate in scientific discussions, rather than just hear science.” (Chi, 1994)
Using generative AI in your assignments
Incorporating this technology in your assignments will generally involve asking your students to critique and compare – or even iteratively improve upon – AI-generated content (see additional reading suggestions in the Resources section below). Ideas continue to accumulate as instructors from all disciplines proactively wrestle with these new challenges; we highlight a few here.
Here, for example, Lucinda McKnight, senior lecturer in pedagogy and curriculum at Deakin University, offers several suggestions for incorporating AI writers into student assignments, including:
- Use AI writers as researchers. They can research a topic exhaustively in seconds and compile text for review, along with references for students to follow up. This material can then inform original and carefully referenced student writing.
- Use AI writers to produce text on a given topic for critique. Design assessment tasks that involve this efficient use of AI writers, then [ask students to provide] critical annotation of the text that is produced.
- Use different AI writers to produce different versions of text on the same topic to compare and evaluate.
- Use and attribute AI writers for routine text, for example, blog content. Use discrimination to work out where and why AI text, human text, or hybrid text are appropriate and give accounts of this thinking.
- Research and establish the specific affordances of AI-based content generators for your discipline. For example, how might it be useful to be able to produce text in multiple languages in seconds? Or create text optimized for search engines?
- Explore different ways AI writers and their input can be acknowledged and attributed ethically and appropriately in your discipline. Model effective note-making and record-keeping. Use formative assessment that explicitly involves discussion of the role of AI in given tasks. Discuss how AI could lead to various forms of plagiarism and how to avoid this. (McKnight, 2022).
In subjects that use problem sets, ask students to explain their thought processes as they solve (a subset of) the problems. A few of the many possible helpful prompts include asking them to describe:
- Why they chose a particular method;
- Why they made certain assumptions and/or simplifications;
- Where they ran into dead ends, and how they found their way forward; and
- What broader takeaways they learned from solving the problem.
Developing students’ metacognitive skills by requiring them to self-regulate and self-explain their solution process may mitigate their use of AI-generated responses. Self-evidently, it is much more difficult for students to explain a problem-solving process when they didn’t actually solve the problem! And if a student does use generative AI in some aspect of the solution, the requirement that they document their thought process will push them to engage a bit more deeply with certain aspects of the problem and with the learning process overall.
In their paper, Mollick & Mollick offer detailed descriptions of ways to leverage programs like ChatGPT in student assignments. They suggest that “…AI can be used to overcome three barriers to learning in the classroom: improving transfer, breaking the illusion of explanatory depth, and training students to critically evaluate explanations.” (Mollick & Mollick, 2022). In line with McKnight’s suggestions above, they provide concrete examples of AI-leveraged assignments that support deeper student learning.
What’s Out There?
Finally, whether you plan to leverage AI or push back against its use in your subjects, it is useful to know about existing AI tools and applications. For a comprehensive and current list, see https://www.futurepedia.io/.
WE ARE HERE TO HELP
Would you like to rethink your real goals for student learning?
Would you like to redesign your assignments and assessments (and possibly the way you teach) to better support those goals?
Are you interested in leveraging the utility of generative AI to create more meaningful assignments and more authentic learning experiences?
Contact us (TLL@mit.edu) with your suggestions, questions, and ideas. What are your strategies for engaging with this new reality? We are happy to collaborate with you on the development of effective approaches and to share ideas with the MIT community.
General Resources
Higher Ed
- Brake, Josh (2022). Education in the World of ChatGPT. The Absent-Minded Professor Blog.
- Bruff, Derek (2022). Three Things to Know about AI Tools and Teaching, Agile Learning Blog.
- D’Agostino, Susan (2023). ChatGPT Advice Academics Can Use Now, Inside Higher Ed.
- Fyfe, Paul (2022). How to cheat on your final paper: Assigning AI for student writing. AI & Society.
- Gleason, Nancy (2022). ChatGPT and the rise of AI writers: how should higher education respond?, Times Higher Education.
- Grobe, Christopher (2023). Why I’m Not Scared of ChatGPT: The limits of the technology are where real writing begins. Chronicle of Higher Education.
- Klopfer, Eric & Reich, J. (2023). Calculating the Future of Writing in the Face of AI. Comparative Media Studies & Writing @ MIT.
- McMurtrie, Beth (2023). Teaching: Will ChatGPT Change the Way You Teach?, Chronicle of Higher Education.
- McKnight, Lucinda (2022, October 14). Eight ways to engage with AI writers in higher education. Times Higher Education.
- Mollick, Ethan R. and Mollick, Lilach (2022). New Modes of Learning Enabled by AI Chatbots: Three Methods and Assignments. Available on SSRN.
- Mondschein, Ken (2022). Avoiding Cheating by AI: Lessons from Medieval History. Medievalists.net.
- Schiappa, Edward & Montfort, Nicholas (2023). Advice Concerning the Increase in AI-Assisted Writing, Comparative Media Studies & Writing @ MIT.
- Stokel-Walker, Chris (2022). AI bot ChatGPT writes smart essays — should professors worry? Nature.
- Watkins, Marc (2022). AI Will Augment, Not Replace [Writing], Inside Higher Ed.
- University of Michigan’s Center for Research on Learning & Teaching (2023). ChatGPT: Implications for Teaching and Student Learning.
- Warner, John (2022). The Biggest Mistake I See College Freshmen Make. Slate.
General
- Bogost, Ian (2022). ChatGPT Is Dumber Than You Think, The Atlantic.
- Roose, Kevin (2022). The Brilliance and Weirdness of ChatGPT, The New York Times.
Resources for Supporting Self-Explanations
- Chi, M. T. H., de Leeuw, N., Chiu, M. H., & LaVancher, C. (1994). Eliciting self-explanations improves understanding. Cognitive Science, 18, 439-477.
- Chi, M. T., Bassok, M., Lewis, M. W., Reimann, P., & Glaser, R. (1989). Self-explanations: How students study and use examples in learning to solve problems. Cognitive Science, 13(2), 145–182.
- Crippen, Kent J., & Earl, Boyd L. (2007). The impact of web-based worked examples and self-explanation on performance, problem solving, and self-efficacy. Computers & Education, 49(3), pp. 809-821.
Resources on Expert v. Novice Learners
- Hardiman, P.T., Dufresne, R. & Mestre, J.P. (1989). The relation between problem categorization and problem solving among experts and novices. Memory & Cognition 17, 627–638. https://doi.org/10.3758/BF03197085
- Larkin, J., McDermott, J., Simon, D.P., & Simon, H. (1980). Expert and Novice Performance in Solving Physics Problems. Science, 208(4450). pp. 1335-1342. DOI:10.1126/science.208.4450.1335
- Polya, G. (2014). How to Solve It: A New Aspect of Mathematical Method. Princeton University Press.
- Wankat, P.C., and F.S. Oreovicz (2015). Teaching Engineering, Second Edition. Chapter 5 – Problem Solving & Creativity (pp. 93-115). Purdue University Press. (Open Access Edition)
References
Chi, M. T. H., de Leeuw, N., Chiu, M. H., & LaVancher, C. (1994). Eliciting self-explanations improves understanding. Cognitive Science, 18, 439-477.
Gleason, Nancy (2022, December 9). ChatGPT and the rise of AI writers: how should higher education respond?, Times Higher Education.
McKnight, Lucinda (2022, October 14). Eight ways to engage with AI writers in higher education. Times Higher Education.
Mollick, Ethan R. and Mollick, Lilach (2022, December 13). New Modes of Learning Enabled by AI Chatbots: Three Methods and Assignments. Available on SSRN.
Warner, John (2022, August 31). The Biggest Mistake I See College Freshmen Make. Slate.