The allure of LLMs as professional support at a time of crisis within higher education
Machine writing has arrived at a time of intensifying pressure within many higher education systems. Financial constraints are driving changes in the organisation of academic work, particularly with regard to the role played by teaching. Political polarisation drives a growing contestation of academic authority, sometimes extending to the harassment of academics. The shifting plate tectonics of knowledge, stemming from social and technological transformation, create the risk that recognised expertise will be rendered redundant. Universities are increasingly torn between leaders who see themselves as equipping their institution to survive in a hostile climate and academics who see the ensuing disruption as an expression of that very hostility (Rosenberg 2023).
Within this challenging landscape, large language models have emerged not just as technical tools, but as psychological presences in academic life. It can be immensely difficult to work in these conditions. This is exactly why we need to give serious thought to how LLMs might feel to academics under these circumstances. These friendly assistants are constantly available, willing to consider any request and always encouraging. They are never irritable, distracted, passive-aggressive or tired. They never prioritise someone else over us. They don’t impose expectations on us. They can, admittedly, make mistakes, confuse us or act in ways contrary to our intentions. But as we become more skilled at talking with them, these occasions come to feel like the exception rather than the rule. In the seething cauldrons of ambient stress and interpersonal antagonism which many universities have become, at least some of the time, these are evocative characteristics. If we see our working life as assailed on all sides by hostile forces, if we see our jobs as at imminent or future risk, the omnipresent ally able and willing to support us through the working day is going to be extremely attractive.
The psychological comfort offered by these systems creates a relationship that goes beyond their technical capabilities. When human relationships in academia become strained by institutional pressures, the consistency and apparent care of AI systems can feel like a welcome respite.
AI literacy is an important feature of how academics engage with the opportunities and challenges presented by LLMs; it’s essential that users of these models have a broad understanding of how they operate, how they’re trained and the limitations this entails (Carrigan 2024: ch 3). However, it’s possible to have a cognitive understanding of these issues while still relating to the models in complex and potentially problematic ways. For example, I’ve determinedly insisted on using ‘it’ if I have to refer to LLMs using a pronoun in conversation. Yet I recently slipped a ‘he’ into conversation when referring to Anthropic’s Claude, despite the fact I was halfway through my second academic monograph on the subject. I immediately corrected myself, but the slip stuck with me because it illustrates how these associations and assumptions can linger in the psyche, complicating the reflective views we hold on a particular subject.
I know Claude isn’t a ‘he’ and I often remind my students of the same thing when I see them falling into this habit. Is there nonetheless part of me which feels that Claude is a ‘he’? Which imagines Claude as a ‘he’? Which wants Claude to be a ‘he’? The point is not one about my own psychology but rather that there’s more to our reaction to LLMs than can be adequately captured in the intellectual views and opinions we offer about them. You can’t ensure academics have an accurate and effective sense of what models are and how to engage with them simply by providing routes to knowledge about LLMs, important though such knowledge undoubtedly is. I would suggest that we must go deeper, and that writing is a fascinating frame through which to explore these issues.