Interstitial Interruptions

It has slowly dawned on me since the start of this dubious ‘age of generative AI’ that a multiplicity of experiences, perspectives and kinds of knowledge of the topic within higher education is inevitable. Indeed, these multiplicities can sit within an individual – take me, for example. I am constantly listening to the inner monologue, asking, What’s my position? What’s my position?

I have found myself being the annoying person in the room, trying – mostly failing – to push against getting carried away by hyped-up narratives of ‘embracing’. Meanwhile I see colleagues grappling with what it means for their beloved disciplines and their students’ learning. All the while, I periodically dip into ChatGPT for my own uses.

Reflecting now in January 2025 on the 8 months since our workshop, and the spaces I had imagined we could have opened up during that 90 minutes, I still have that queasy feeling of traversing values and practices that don’t quite fall into alignment. I still don’t know which way is up.

I needed some steady hands – colleagues I trusted and admired – to shine an intellectual and values-based light into the murk. With an approaching deadline for the Networked Learning 2024 conference submission, I persuaded Helen Beetham, Rosemarie McIlwhan and Catherine Cronin to help bring a rather ambitious, slightly crazy idea to life for Malta in May 2024. Catherine has written a lovely reflection on the workshop itself, capturing its details and her own insights beautifully (go read it).

Generating AI Alternatives

The workshop was designed to challenge the beguiling promises of generative AI. It was not going to be efficient; we were aiming to “produce knowledge at a human speed and scale” (Drumm et al., 2024).

Responding to our provocations (they were not prompts!), participants were asked to draw on rough-tuned thought and instinct – not on scraped datasets or zero-shot learning. In just 90 minutes, nearly 100 participants produced a remarkable range of creative responses: on paper, in pen, paint, card, markers, amended printed words, digital texts, poems, anecdotes, drawings, paper sculptures, slides, music, video – some even created with generative AI tools, others in digital media, some analogue.

And in that space between all the structures and spaces of higher education, conferences and the “apparatus” of AI (as Dan McQuillan aptly calls it in his interview on Helen Beetham’s podcast), we tried to interrupt the relentless business as usual. Like intrusive interstitial ads on a website, we wanted our provocations to get in the way, demand attention and reaction.

The Workshop

The workshop itself is a bit of a blur for me. As the only facilitator on site, I was managing the practicalities: the materials, the people and the reliably unreliable technology, so my attention was split. Still, I remember being blown away by the plenary discussion and the passion, thoughtfulness and generosity of those who spoke and contributed to the text. People spoke of their struggles and those of their colleagues and students. I don’t regret not recording it; it was meant to be a delimited and ephemeral moment – something that can’t be paused and replayed (one reason why I worked in theatre for 10 years). As participants posted their responses on the Padlet, the diversity in thinking, imagination and heartfelt authenticity was clear. It was a richness that neither I nor any generative AI could have predicted.

A story cube: roll it to see what you should do!

There was no homogeneity to the range of artefacts produced during the workshop. No common overriding ideology or singular positionality emerged. Some participants used generative AI tools to create images, others shared positive anecdotes, some critiqued AI capabilities, and others found themes of resistance and sought hope in hopeless contexts.

This, for me, is the key takeaway. Where generative AI tools converge what they ‘know’ of human knowledge into a median point of homogeneity, this workshop provided a counterpoint: a resistance to groupthink and to striving for meta-themes. Differences could co-exist without having to connect to each other (something I’ve been thinking and writing about a lot in the last year). And again, in opposition to what generative AI would do, there were no binaries or false equivalences of ‘on the one hand and on the other’; it was unbalanced and slightly messy; it was generating human generation.

What’s Next?

We’re excited to launch a website soon, which will host all the resources and ideas from the workshop, so anyone can take them and run with them. Watch out too for further blog posts from my co-conspirators, and more plans to come. If you were one of the participants in the workshop, check your email as we’ll be in touch soon about what’s next.

Malta, May 2024

My Guidelines on Generative AI for Students on MSc in Blended and Online Education

These are the guidelines I have (re)written for my students this year. I’m encouraging them to explore generative AI thoughtfully, responsibly and critically; it is built into my teaching and their learning activities now. I’ve included our wording (co-crafted with Dr Stuart Taylor) for the assessment cover sheet. Feel free to share, adapt and reuse (it’s CC-BY-NC).

We need to teach Critical AI Literacies and we need to teach them now

As the internet – well, the parts of the internet inhabited by developers – melts down over the release of GPT-4, the successor to GPT-3/3.5 that can take images as well as text as input, I’ve been thinking about the responsibilities we educators have to our learners. It’s a foregone conclusion that many workplaces and work practices will increasingly be augmented by AI tools. Curiously, what we are getting is not necessarily the promised land of automating all those tedious jobs where a human has to look across multiple platforms, files or databases in order to assemble, update or generate something meaningful. (How many times a day do I copy and paste via Notepad to strip formatting?? Why can’t Windows remember the last folder I was working in??) So much for AI freeing up time; if anything, the cognitive and administrative load on educators is increasing because of AI.

Image generated by DALL-E with the prompt “someone inside a computer”

So where should that limited energy and effort go?

My opinion is that it should go on helping students develop skills to understand, critique and generally deal with AI platforms, or at least be prepared to engage with the (hopefully) regulated and ethically sound platforms of the future. I’m not for a moment saying that we should get all students onto ChatGPT now, but we need to start thoughtful discussions with them about AI. They themselves are best placed to ask the really difficult questions: what is the carbon footprint of a ChatGPT query? What cis, white, Western, male biases are AI tools replicating? Can new knowledge be made, or will we be eternally returning to an increasingly bland lowest common denominator of what we already know?

Prompt engineering is absolutely a skill that students should be developing. So is editing and refining the output of AI. Even more so is knowing when it is appropriate to use AI and when it is not.
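To make that concrete, here is a minimal sketch of what iterative prompt refinement might look like in code. It assumes the OpenAI Python client; the model name, the prompts and the ask() helper are illustrative assumptions of mine, not a recommendation of any particular tool or of the ‘right’ way to do it.

    # Illustrative only: a vague prompt, then a refined one, assuming
    # the OpenAI Python client and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        # Send one prompt and return the model's text reply.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical model choice
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # First attempt: underspecified, so the answer will be generic.
    draft = ask("Explain networked learning.")

    # Refined: audience, length and caveats added - the 'engineering'.
    refined = ask(
        "Explain networked learning in three short paragraphs for "
        "postgraduate students new to education research, and flag "
        "any claims that are contested in the literature."
    )

    # The human skill: reading both critically and deciding what,
    # if anything, is worth keeping or rewriting.
    print(draft, "\n---\n", refined)

The point of the sketch is the last step: the value sits in the human comparison and editing, not in the API call.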

In addition, there is a skill we will all have to relearn: reading. Reading is no longer what it used to be, now we know that what we are reading may have been generated by a non-thinking, predictive model. Skim reading, fast reading, knowing when to skip whole paragraphs or jump to the relevant bits will all be massively important when we are drowning in a sea of content. And that’s just for text; images, video and audio are all going to have to be viewed, watched and listened to with circumspection. This places another layer of barriers in front of disabled students, or students who are being taught in a language different to their first: how do you skim read with a screen reader, or when you need to live-translate in your head as you go?

The internet fire hose of stuff is about to get an upgrade, and we all need to (wet)suit up. And this time, it will be using all the knowledge of how easily we are manipulated into outrage and into spreading hate and lies. Digital media literacy, data literacy, science literacy: let’s throw them all in, because without these as graduate attributes, any idea of the university as a ‘good’ for society is left for dead.

Behind it all, there are some people and companies who will be making a lot of money. It’s never been more important to interrogate the ‘black box’ of a technology, especially as the debate rages on about whether developers can even see into the innards of their own algorithms any longer. Surely now is the time to start equipping ourselves and our students with critical digital and AI literacies?

Artificial intelligence inserting doubt into the relationship between educators and learners

As the various responses to the implications of artificial intelligence such as ChatGPT wash over us in education, I’ve been thinking about its consequences for the relational aspects of education.

Just as deepfake video, AI-generated images and even naturalistic voice platforms make us second-guess the veracity and provenance of what we are seeing or hearing, human-like text generation has inserted a doubt into our minds. The first doubt is the educator’s, about their own skills: can they discern what is student-generated work and what is not? The second is the more obvious question of whether the work they are spending time grading and giving feedback upon is the words, thoughts and accurate reflection of a human’s learning. In combination, these doubts become present whenever a lecturer sets about the task of grading and/or giving feedback on student work: has artificial intelligence been used or not? So the potential impact on students, who are putting in the time and their original work, is that their work is, by default, being treated with distrust from the start.

Photo by Alex Knight on Pexels.com

This leads to the other area of doubt, which is on the learner’s part. They may doubt that their work or effort is being taken at face value as their own. Secondly, taken to the next logical level, they may suspect that any personalised feedback and grades they seemingly receive from a human educator have in fact been generated by AI. This ‘weaponisation’ of AI can come from both sides looking for efficiency, or simply serve as a crutch to prop up a lingering doubt that one’s own work is really any better than AI’s (yes, academics have imposter syndrome as much as students do).

While I don’t fully subscribe to the thesis of Adrian Wallbank’s piece in Times Higher Education that AI should be resisted and kept completely away from the classroom (good luck policing that), I agree that assessment should be used as a process for students to reflect on their learning:

“What I suggest ought to be assessed (and which helps us navigate some of the issues posed by ChatGPT) is a record of the student’s personal, but academically justified, reflections, arguments, philosophising and negotiations. Student work would then be a genuine, warts-and-all record of the process of learning, rather than the submission of a “performative” product or “right argument”, as one of the students in my research so aptly put it. This would enable our students to become better thinkers.”

Ben Thompson, the excellent technology journalist (another sector and profession having an existential moment of crisis about AI), also contributes a parent’s view of the education situation and says the new skills learners could develop are editing and verifying information. It’s not a bad point, and perhaps an obvious end-point given the information abundance students live within now and will in the future. Seeking out the human skills needed to work with AI-generated content, and assessing those skills, is a good way to go.

As I gather advice and resources for colleagues to help us mull over the short-term and long-term strategies we need to employ, I don’t think I can resist any longer the thought that this is a game-changing moment for education. In a YouTube video, from  in Charles Knight, he puts it well: the economic model upon which higher education has be operating – that is, the time-pressured systems of assessment for staff and students, relying often on precarious labour – has left itself vulnerable to gamification. I’d argue that gamification of that system is now in the hands of everyone, staff and students. Knight rightly calls that now is the time for appropriate resourcing of staff workloads to enable them to design assessments and time to grade them. I can only add to that, investing in people – those who teach and support learners – is more important than paying money for technologies to catch people out. As many before me have observed, the latter is an arms race that cannot be won. Teaching and learning is relational and it’s through prioritising that with time, money and status will higher education be equipped to deal with the doubt and distrust inserting itself between educators and students.