Inviting thoughts about AI in education
I’ve been thinking a lot lately about AI. Maybe too much. But it has been an interest for decades: I especially loved Hofstadter’s GEB, which I first read as a bachelor’s student in Canada. And now AI has become a crisis. So I was asked to give some advice, both here at the Heymans Institute and at the NWO.
In parallel with my historical and theoretical research on adjacent topics, I’ve been trying to help our colleagues get a grip on the information overload that sometimes causes contemporary discussions of AI to stall. Beyond sharing the usual warnings about hallucinations and our persistent inability to detect its use without false positives, I’ve been especially concerned about its impact on how we teach writing. Because I think the nuances will be relevant to future discussions, I’ve also lately been trying to figure out the differences between AI literacy and AI competency (as well as uncritical AI and the slightly different definitions of responsible AI—one Dutch, the other European).
I’ve shared at a BSS Educational Afternoon, at a ReproducibiliTEA seminar (with Mandy van der Gaag), and at the university-wide Education Festival. I organized a discussion session at an international conference this summer in Paris, and spoke about possible futures in my presidential address to the Society for the History of Psychology at the American Psychological Association convention in Denver. I’ve also served as proefkonijn (guinea pig) for several educational innovations (like ESI’s AI for Instructors course, EduSupport’s resource on AI in education, and CIT’s Critical AI Literacy Module for students in Brightspace). And I contributed to a report about the implications of AI for diversity and inclusion, as well as to the application for an ENLIGHT network that seeks to collect and share insights across our wider community.
Before bringing all of this home to update my earlier advice (again with Mandy van der Gaag), I first wanted to ask you about your thoughts. Even though it would be faster and more efficient to “ask the toaster,” as my student Sofia Cernakova has aptly taken to saying, efficiency isn’t what I value in this instance: I want to move the discussion forward with you, if you’re willing, together. (Mindwise also seemed like an ideal place to take this next step: we’ve shared thoughts about writing many times—including reflections on AI and creativity by Eric Rietzschel—and it’s a venue everyone has equal access to.)
How do you feel about the rise of AI, and especially LLMs, since the start of ChatGPT’s “free preview” at the end of 2022? And how have you been thinking about adapting your assignments to reflect its widespread use?
We have colleagues who are switching away from assigning essays because they say there’s no point in using them for assessment. I have sympathy for this: exchanging writing assignments for something else might well be appropriate for the learning goals of individual courses. It’s just that I disagree with cutting writing completely from our curriculum.
I still think writing is a useful and important skill to learn at uni: it’s extremely difficult to explain technical topics to non-specialist audiences, including the public, and I don’t know where else you might be taught to do that. I also find it very satisfying, personally, to figure out how to clearly express ideas that I developed myself. I’ve even earned a living doing it. And I’d like our students to have that opportunity too. So I think we’d all lose something significant if we were to stop teaching students to do it well. In short, I’d like to have a conversation here at home about how we can preserve the teaching of writing in psychology education.
We’re not making toast. We’re collaborating to have wonderful ideas.
Yes, I accept that the days of using essays as singular proxies to demonstrate mastery are probably over. I’m just not ready to accept defeat when it comes to the teaching of writing itself. There are things we can do to preserve it.
My own master-level Writing Skills course has always been more processual than product-oriented. My students do lots of little exercises that contribute to the construction of a portfolio, including peer reviews and replies and revisions. (So many small pieces, at such a tempo, that replacing them all with AI would still involve enough actual work to be instructive in itself.) They also come to understand that the point is the journey, not the destination: my view of writing as a professional explainer of difficult ideas is that it is first about figuring out what you think, clearly, and only sharing afterward. This is a writing course, though, not a content course that uses the product of writing as a proxy that’s then given a grade.
My other master-level course, called Boundaries of Psychology, has always required a presentation and a final essay. But I’m more interested there in how the students define and articulate and demarcate their boundaries and boundary-work and trading zones, etc., than in the quality of the writing itself. Just defining the prompt to get ChatGPT to write the kind of essay I want would show an impressive mastery of the material. Indeed, in the next edition of the course, I may even add some exercises to make that sort of complicated prompt-engineering easier to put together. (This would shift the summative focus away from the essay itself toward its construction, but—again—that’s something I want to discuss before deciding.)
Other courses I’m involved with use Perusall to enable the students to read together, and share annotations, before coming to discuss the assigned text in a seminar. Or we require writing and presentations in a course conference that includes live Q&A from both students and staff: not something you could just show up and do unless you knew your material well. But I also know, through my involvement in the Teaching Academy’s Community of Practice on AI in Education, that more is possible. Systems are even coming together (including from our own Sebastiaan Mathôt and Wouter Kruijne!) that can give students specialized feedback, saving time for more substantive interactions with peers and supervisors.
So I’m curious: do you have any responses to AI that you’re particularly pleased to have developed for your courses? (Have you implemented them, or just thought about them?)
Do you use an online system that tracks edits, or do you require a draft to be written in pen? Or do you have your students write on a computer in a controlled environment like Aletta Jacobs Hall? Do you allow your students to write at home but then also require them to submit their substantive revisions? Do you have them build a portfolio showing their process as they respond to peer review and AI feedback?
Do you show them how many revisions it usually takes you before an idea becomes a publishable manuscript? (This essay is substantially revised from a post at LinkedIn, which was itself substantially revised: it started as just a few lines, then went through a dozen substantive revisions as I discussed it with colleagues and students.)
Please share your thoughts below. I’d much rather think along together than give decontextualized advice based solely on what I’ve read is being done elsewhere. I’d also much prefer to hear what you’ve been finding works with our students in your own classrooms. Figuring it out together is part of what AI can’t do, too, and I’m keen to preserve that human side of our educational practice.
We’re not making toast. We’re collaborating to have wonderful ideas. How do we ensure that what seems to me to be inevitable AI-use helps us do more of that? (And that it takes away less from what we value?)
Addendum for students
My apologies: I realized at the last minute that I wrote this specifically for my colleagues. Your teachers. (Those for whom I was asked to provide advice, and who will ultimately decide what your classroom experience at uni will involve.) But I haven’t ever addressed myself to you: those whom my advice would affect. This, though, isn’t because you are an afterthought. Actually, this essay was written after I prepared my lesson on AI for the Reflecting on Science and Integrity ReMa course. But that seminar discussion also isn’t meant to be about AI in education. So there’s a gap. I therefore invite you, formally, to share below. And if there is sufficient interest, we could have a Town Hall in the Heymans Cantine where we can unpack everything and start moving forward together.
N.B. Despite my two different uses of the em-dash, which has mistakenly been popularized as a sign of AI, none of the text here was generated by AI. Although, of course, the material linked to might include AI-generated text (including, obviously, my “ask the toaster” query).
Feature image by Jan Luyken (1649–1712),
“Siege of Groningen in which a cannon is blessed by six monks, 1672”
(Made accessible via Creative Commons by the Rijksmuseum, Amsterdam)
I like how you ask us for our ‘responses to AI’ – that’s much broader than just how we use it, or how we resist it. At the moment, one of my main responses is that I have started to explore ways of getting more involved with processes, and less with products. The work that we ask students to produce was always meant either to get them to go through a certain process, or to assess whether they have done so. But for the sake of efficiency we have been focusing more and more on the products, sometimes (in my opinion) leaving students too much to their own devices in managing their process. So my creative challenge at the moment is to get back to the process, the heart of the teaching and learning experience, without completely eating up all the time I also need to spend on other things. For me personally, this also means a fight against the ‘efficiency’ that AI offers. Generative AI, I have come to realize, is like Roundup. It is an unbelievably powerful tool that will help people maximize their productivity and efficiency. Yet it might also leave behind a barren waste where nothing else will grow. I myself prefer inefficient diversity.
Thank you, Jeremy, for encouraging this conversation and creating space to think and speak more openly about AI in education. I believe that’s really important, since many conversations around AI still happen in secrecy (or not at all), especially on the student side. There seems to be a gap between how students actually use AI and the assumptions some educators might have. I also wonder how students and educators from different departments engage with AI and what kinds of cultures or norms exist around it.
I really resonated with Eric Rietzschel’s comment about shifting focus from products to process. It made me think about how, in many cases, students are expected to manage their learning journey almost entirely on their own (or at least, that’s how it often feels), especially when the emphasis is placed so heavily on outcomes. I wonder: do educators feel a need to create more space for focusing on process? And if they do, are there institutional or structural possibilities to support that?
Turning to students, I also wonder: would that kind of support help? In this strange new adventure of figuring out academia (now with a crazy AI toaster in the mix), would more guidance be welcome? If there were spaces where students could come together and safely speak about the messy, confusing, and sometimes overwhelming journey of learning, would we show up? Would we ask for help?
In reflecting on the teaching of developmental psychology, I would like to highlight a dimension of constructivist pedagogy that remains fundamentally irreplaceable by artificial intelligence: the clinical interview. Requiring students to undertake a clinical encounter, or more broadly any kind of field inquiry, continues to be invaluable for several reasons. This was already the case before the advent of generative AI, but the point has become even more evident in its aftermath. To function capably in the field, students must first acquire a solid grasp of theory in order to generate good data; they must then return from those data to theory, using it as a lens through which meaningful analysis can be undertaken. This circuit linking theory, methodology, and analysis constitutes a highly productive detour, one that fosters both teaching and learning in a way that no artificial system can replicate.
It is, of course, unrealistic to expect that every course within a psychology curriculum can draw on such experiences. Yet it is unfortunate that those courses for which it is possible increasingly tend to abandon them. The clinical interview, like other experiential practices, forces students into contact with complexity, ambiguity, and the interpretive demands of real human interaction. This makes the exercise pedagogically rich not only for the acquisition of knowledge but also for the cultivation of critical judgment. Where direct clinical encounters are impractical, it remains possible to “re-edit” these kinds of experiences in the form of group work or team projects, either inside or outside the classroom. It is precisely through the negotiation of meanings with others that learning processes are compelled to resort to strategies that no artificial intelligence can approximate.
This perspective also aligns with the broader pedagogical thesis that it is more fruitful to emphasize process rather than outcomes. Summative assessments have long been subject to criticism, and in the present context that critique becomes more pressing than ever. What truly matters is not the polished final product, but the intellectual and methodological journey by which students come to clarify their understanding. In this sense, the incorporation of authentic, dialogical, and process-oriented practices such as the clinical interview ensures that psychology education retains what is most essential: the irreducible human encounter at the heart of knowledge construction.
It should be clear, finally, that what I am proposing is not a new way of integrating artificial intelligence into the classroom, but rather a strategy for avoiding its most impoverished uses, such as asking it to draft a report or to supply a ready-made answer to a narrowly defined question. The process I have mentioned, moving from theory to data and back again while engaging in analysis and negotiating meaning in collective settings, cannot be reduced to such shortcuts. Yet within that broader circuit, AI may certainly have a role to play: as an auxiliary tool in the drafting process, as a resource in the search for relevant information, or even as an additional voice in the unfolding of a group debate. Properly situated, AI becomes not a replacement for critical pedagogical practice but a potential interlocutor that enriches it.
Thanks for the engaging discussion and the post full of interesting references. I am writing as a mathematician here, and without a clear answer. I think that for such a discussion we need to become a bit more comfortable with the technology, both to understand better how it can be used and misused and to be able to reason about what to do.
I still see many people avoiding thinking about this until it bites back during their courses. I don’t have a good solution; in my case, I am also trying to shift attention to the process, leaving plenty of optional opportunities for students to practice and get feedback, in the hope of reducing the incentives to cheat.
I have written down some of my more recent reflections on my blog, in case you find them interesting: see https://www.mseri.me/ai-in-education-some-food-for-thought/ and https://www.mseri.me/unai-education/
My hope is that we can use this as an opportunity to find new ways to support personalized learning, enhancing rather than harming practice, repetition, and social interaction. But to achieve this, I think we need to find the time to develop the course and its policies together with the students themselves, and to iterate and improve a few times before settling on something.