On the value of human intelligence: developing arguments against the use of Generative AI in non-Assured Assessments

Matthew Allen

Abstract

NOTE: Due to a technical issue, Matt was not able to present during the Symposium. The recording of the presentation is uploaded here.

This paper is a response to the recent Assessment Architecture at UNE and its recognition that for many forms of traditional assessment – notably essay-writing, which is widely used in the humanities – we cannot assure ourselves that students have not used so-called Generative ‘Artificial Intelligence’ (Gen-AI). Underlying the architecture is an implicit assumption that, without such assurance, students will cheat. Viewed from the perspective of the sociology of deviance, this is a kind of control theory grounded in (arguably discredited) ideas about rational choice, specifically that in the absence of constraints, humans act as self-servingly as possible. In contrast, my paper is grounded in a very different understanding of how students think and choose. By articulating the ethical and political reasons not to use Gen-AI, I aim to encourage students to make an informed decision not to use it. In particular, I want to persuade students that developing their skills in reading, thinking and writing, independent of Gen-AI, is essential to the humanities. I therefore call on students to train their human intelligence, rather than a privately owned technology that is implicated in the resurgence of fascism.