Inviting thoughts about AI in education
I’ve been thinking a lot lately about AI. Maybe too much. But it has been an interest for decades: I especially loved Hofstadter’s GEB, which I first read as a bachelor’s student in Canada. And now AI has become a crisis. So I was asked to give some advice, both here at the Heymans Institute and at the NWO.
In parallel with my historical and theoretical research on adjacent topics, I’ve been trying to help our colleagues get a grip on the information overload that sometimes causes contemporary discussions of AI to stall. Beyond sharing the usual warnings about hallucinations and our persistent inability to detect AI use without false positives, I’ve been especially concerned about its impact on how we teach writing. Because I think the nuances will be relevant to future discussions, I’ve also lately been trying to figure out the differences between AI literacy and AI competency (as well as between uncritical AI use and the slightly different definitions of responsible AI—one Dutch, the other European).
I’ve shared at a BSS Educational Afternoon, at a ReproducibiliTEA seminar (with Mandy van der Gaag), and at the university-wide Education Festival. I organized a discussion session at an international conference this summer in Paris, and spoke about possible futures in my presidential address to the Society for the History of Psychology at the American Psychological Association convention in Denver. I’ve also served as proefkonijn for several educational innovations (like ESI’s AI for Instructors course, EduSupport’s resource on AI in education, and CIT’s Critical AI Literacy Module for students in Brightspace). And I contributed to a report about implications of AI for diversity and inclusion, as well as to the application for an ENLIGHT network that seeks to collect and share insights across our wider community.
Before bringing all of this home to update my earlier advice (again with Mandy van der Gaag), I first wanted to ask you about your thoughts. Even though it would be faster and more efficient to “ask the toaster,” as my student Sofia Cernakova has aptly taken to saying, efficiency isn’t what I value in this instance: I want to move the discussion forward with you, if you’re willing, together. (Mindwise also seemed like an ideal place to take this next step: we’ve shared thoughts about writing many times—including reflections on AI and creativity by Eric Rietzschel—and it’s a venue everyone has equal access to.)
How do you feel about the rise of AI, and especially LLMs, since the start of ChatGPT’s “free preview” at the end of 2022? And how have you been thinking about adapting your assignments to reflect its widespread use?
We have colleagues who are switching away from assigning essays because they say there’s no point in using them for assessment. I have sympathy for this, especially where exchanging writing assignments for something else might also suit the learning goals of individual courses. It’s just that I disagree with cutting writing completely from our curriculum.
I still think writing is a useful and important skill to learn at uni: it’s extremely difficult to explain technical topics to non-specialist audiences, including the public, and I don’t know where else you might be taught to do that. I also find it very satisfying, personally, to figure out how to clearly express ideas that I developed myself. I’ve even earned a living doing it. And I’d like our students to have that opportunity too. So I think we’d all lose something significant if we were to stop teaching students to do it well. In short, therefore, I’d like to have a conversation here at home about how we can preserve the teaching of writing in psychology education.
We’re not making toast. We’re collaborating to have wonderful ideas.
Yes, I accept that the days of using essays as singular proxies to demonstrate mastery are probably over. I’m just not ready to accept defeat when it comes to the teaching of writing itself. There are things we can do to preserve it.
My own master-level Writing Skills course has always been more processual than product-oriented. My students do lots of little exercises that contribute to the construction of a portfolio, including peer reviews and replies and revisions. (So many small pieces, at such a tempo, that replacing them all with AI would still involve enough actual work to be instructive in itself.) They also come to understand that the point is the journey, not the destination: my view of writing as a professional explainer of difficult ideas is that it is first about figuring out what you think, clearly, and only sharing afterward. This is a writing course, though, not a content course that uses the product of writing as a proxy that’s then given a grade.
My other master-level course, called Boundaries of Psychology, has always required a presentation and a final essay. But I’m more interested there in how the students define and articulate and demarcate their boundaries and boundary-work and trading zones, etc., than in the quality of the writing itself. Just defining the prompt to get ChatGPT to write the kind of essay I want would show an impressive mastery of the material. Indeed, in the next edition of the course, I may even add some exercises to make that sort of complicated prompt-engineering easier to put together. (This would shift the summative focus away from the essay itself toward its construction, but—again—that’s something I want to discuss before deciding.)
Other courses I’m involved with use Perusall to enable the students to read together, and share annotations, before coming to discuss the assigned text in a seminar. Or we require writing and presentations in a course conference that includes live Q&A from both students and staff: not something you could just show up and do unless you knew your material well. But I also know, through my involvement at the Teaching Academy’s Community of Practice on AI in Education, that more is possible. Systems are even coming together, including from our own Sebastiaan Mathôt and Wouter Kruijne, that can give students specialized feedback, saving time for more substantive interactions with peers and supervisors.
So I’m curious: do you have any responses to AI that you’re particularly pleased to have developed for your courses? (Have you implemented them, or just thought about them?)
Do you use an online system that tracks edits, or do you require a draft to be written in pen? Or do you have your students write on a computer in a controlled environment like Aletta Jacobs Hall? Do you allow your students to write at home but then also require them to submit their substantive revisions? Do you have them build a portfolio showing their process as they respond to peer review and AI feedback?
Do you show them how many revisions it usually takes you before an idea becomes a publishable manuscript? (This essay is substantially revised from a post on LinkedIn, which was itself substantially revised: it started as just a few lines, then went through a dozen substantive revisions as I discussed it with colleagues and students.)
Please share your thoughts below. I’d much rather think along together than give decontextualized advice based solely on what I’ve read about what’s being done elsewhere. I’d also much prefer to hear about what you’ve been finding in your classrooms that works with our students. Figuring it out together is part of what AI can’t do, too, and I’m keen to preserve that human side of our educational practice.
We’re not making toast. We’re collaborating to have wonderful ideas. How do we ensure that what seems to me to be inevitable AI-use helps us do more of that? (And that it takes away less from what we value?)
Addendum for students
My apologies: I realized at the last minute that I wrote this specifically for my colleagues. Your teachers. (Those for whom I was asked to provide advice, and who will ultimately decide what your classroom experience at uni will involve.) But I haven’t ever addressed myself to you: those whom my advice would affect. This, though, isn’t because you are an afterthought. Actually, this essay was written after I prepared my lesson on AI for the Reflecting on Science and Integrity ReMa course. But that seminar discussion also isn’t meant to be about AI in education. So there’s a gap. I therefore invite you, formally, to share below. And if there is sufficient interest, we could have a Town Hall in the Heymans Cantine where we can unpack everything and start moving forward together.
N.B. Despite my two different uses of the em-dash, which has mistakenly been popularized as a sign of AI, none of the text here was generated by AI. Although, of course, the material linked to might include AI-generated text (including, obviously, my “ask the toaster” query).
Feature image by Jan Luyken (1649–1712),
“Siege of Groningen in which a cannon is blessed by six monks, 1672”
(Made accessible via Creative Commons by the Rijksmuseum, Amsterdam)