We need to teach Critical AI Literacies and we need to teach them now

As the internet – well, the parts of the internet inhabited by developers – melts down over the release of GPT-4, the successor to GPT-3/3.5 which can accept images as well as text as input, I’ve been thinking about the responsibilities we educators have to our learners. It’s a foregone conclusion that many workplaces and work practices will increasingly be augmented by AI tools. Curiously, what we are getting is not the promised land of automating all those tedious jobs where a human has to look across multiple platforms, files or databases in order to assemble, update or generate something meaningful. (How many times a day do I copy and paste via Notepad to strip formatting?? Why can’t Windows remember the last folder I was working in??) So much for AI freeing up time; if anything, the cognitive and administrative load on educators is increasing because of AI.

Image generated by DALL·E with the prompt “someone inside a computer”

So where should that limited energy and effort go?

My opinion is that it should go on helping students develop the skills to understand, critique, and generally deal with AI platforms, or at least be prepared to engage with the (hopefully) regulated and ethically sound platforms of the future. I’m not for a moment saying that we should get all students onto ChatGPT now, but we need to start thoughtful discussions with them about AI. They themselves are best placed to ask the really difficult questions, like: what is the carbon footprint of a ChatGPT query? What cis, white, Western, male biases are AI tools replicating? Can new knowledge be made, or will we be eternally returning to an increasingly bland lowest common denominator of what we already know?

Prompt engineering is absolutely a skill that students should be developing, as is editing and refining the output of AI. Even more important is the skill of knowing when it is appropriate to use AI and when it is not.

In addition, there is a skill we will all have to relearn: reading. Reading is no longer what it used to be now that we know that what we are reading may have been generated by a non-thinking, predictive model. Skim reading, fast reading, knowing when to skip whole paragraphs or jump to the relevant bits will all be massively important when we are drowning in a sea of content. And that’s just for text; images, video and audio are all going to have to be viewed, watched and listened to with circumspection. This places another layer of barriers in front of disabled students, or students who are being taught in a language different to their first: how do you skim read with a screen reader, or when you need to live translate in your head as you go?

The internet fire hose of stuff is about to get an upgrade, and we all need to (wet)suit up. And this time, it will be using all the knowledge of how easily we are manipulated into outrage and into spreading hate and lies. Digital media literacy, data literacy, science literacy; let’s throw them all in, because without these as graduate attributes, any idea of the university as a ‘good’ for society is left for dead.

Behind it all, there are some people and companies who will be making a lot of money. It’s never been more important to interrogate the ‘black box’ of a technology, especially as the debate rages on about whether developers can any longer see into the innards of the algorithms. Surely now is the time to start equipping ourselves and our students with critical digital and AI literacies?

Artificial intelligence inserting doubt into the relationship between educators and learners

As the various responses to the implications of artificial intelligence such as ChatGPT wash over us in education, I’ve been thinking about its consequences for the relational aspects of education.

Just as deep fake video, AI-generated images and even naturalistic voice platforms make us second-guess the veracity and provenance of what we are seeing or hearing, human-like text generation has inserted a doubt into our minds. The first doubt is the educator’s, about their own skills: can they discern what is student-generated work and what is not? The second is the more obvious question of whether the work they are spending time grading and giving feedback upon represents the words, thoughts and learning of a human. In combination, these doubts become present whenever a lecturer sets about the task of grading and/or giving feedback on student work: has artificial intelligence been used or not? So the impact on students who are putting in the time and submitting original work is that their work is, by default, potentially treated with distrust from the start.

Photo of a robot by Alex Knight on Pexels.com

This leads to the other area of doubt, which is on the learner’s part. They may doubt that their work is being taken at face value as their own effort. Secondly, taken to the next logical level, they may suspect that any personalised feedback and grades they seemingly receive from a human educator have in fact been generated by AI. This ‘weaponisation’ of AI can come from both sides looking for efficiency, or simply serve as a crutch to prop up a lingering doubt that their own work is really any better than AI’s (yes, academics have imposter syndrome as much as students do).

While I don’t fully subscribe to the thesis from Adrian Wallbank’s piece in The Times Higher that AI should be resisted and kept completely away from the classroom (good luck policing that), I agree that assessment should be used as a process for students to reflect on their learning:

“What I suggest ought to be assessed (and which helps us navigate some of the issues posed by ChatGPT) is a record of the student’s personal, but academically justified, reflections, arguments, philosophising and negotiations. Student work would then be a genuine, warts-and-all record of the process of learning, rather than the submission of a “performative” product or “right argument”, as one of the students in my research so aptly put it. This would enable our students to become better thinkers.”

Ben Thompson, the excellent technology journalist (another sector and profession having an existential moment of crisis about AI), also contributes a parent’s view of the education situation and suggests the new skills learners could develop are editing and verifying information. It’s not a bad point, and perhaps an obvious end-point given the information abundance students live within now and will in the future. Seeking out the human skills needed to work with AI-generated content, and assessing those skills, is a good way to go.

As I gather advice and resources for colleagues to help us mull over the short-term and long-term strategies we need to employ, I don’t think I can resist any longer the thought that this is a game-changing moment for education. In a YouTube video from Charles Knight, he puts it well: the economic model upon which higher education has been operating – that is, the time-pressured systems of assessment for staff and students, relying often on precarious labour – has left itself vulnerable to gamification. I’d argue that gamification of that system is now in the hands of everyone, staff and students. Knight rightly argues that now is the time for appropriate resourcing of staff workloads, to enable them to design assessments and have the time to grade them. I can only add that investing in people – those who teach and support learners – is more important than paying money for technologies to catch people out. As many before me have observed, the latter is an arms race that cannot be won. Teaching and learning is relational, and it is by prioritising that relationship with time, money and status that higher education will be equipped to deal with the doubt and distrust inserting itself between educators and students.