AI in Research

A pretext to talk about accountability, text production, ideas, expressions, distributed connectivity and new paradigms
A banana on a table and an image of a banana on a laptop on the same table. Each of the two bananas has a white frame around it with the word 'Banana' stuck on top of it.
Ceci n’est pas une banane - Max Gruber / Better Images of AI / CC-BY 4.0

In April 2025, the Faculty of Arts and Humanities at King’s College London convened a townhall on AI in Research, which was very well attended across departments, centres and institutes. Its scope was inclusive, both in terms of:

  • topics - from provocations around College guidance on generative AI to snapshots of research projects adopting and challenging advanced AI methods; and,
  • technical literacy - everyone interested in how AI is changing or could change research and education, regardless of their proficiency in AI tools, was invited to attend and encouraged to express their opinion.

The King’s Digital Lab (KDL) team has used Machine Learning and AI in a few research projects in the past (see for example this retrospective by Senior Research Software Engineer Geoffroy Noel) and has more recently defined a research theme on AI/ML to integrate ongoing analysis and experimentation with guidance and critical reflections. The work of team members engaging with this research theme spans discussion of practical problems and use cases encountered in projects such as Sculpting Time with Computers Proof of Concept (led by Daniel Chavez Heras) and Social Dynamics of People’s War pipelines (led by Jonathan Fennell), as well as collaborative reflections on processes that can better guide KDL’s involvement with these technologies and our collaboration with partners. Interestingly, whether practice- or process-focused, this work tends to highlight the importance of principles (e.g. quality, responsibility, accountability, verifiability and reproducibility, and security) which can act as guiding values and a compass for technical strategy and operations in collaborative research projects.

It is not surprising, therefore, that the provocation offered at the above-mentioned AI townhall by one of KDL’s affiliates, Patrick ffrench, Professor of French in the Department of Languages, Literatures and Cultures, struck a chord with KDL’s preoccupations as a research software engineering team increasingly engaged with AI.

What follows is a dialogue between Arianna Ciula (AC), Director and Senior Research Software Analyst at KDL, and Patrick (Pff), from which some key points of resonance emerge.

AC: Patrick, I liked how you introduced your provocation by positioning yourself and outlining your reaction to the College guidance, so I thought it would be useful to repeat it here.

Pff: I was speaking as an ‘ordinary’ researcher, let’s say, with no basis other than the solicitation AI makes to me as a researcher in Arts and Humanities, as someone interested in the questions: how can I use it? But also, how should I not use it? I started with the second question, because in the course of thinking about this event I was led to the KCL ‘Guidance for the Responsible Use of Generative AI in Research’ produced by the Research Integrity Office, and the emphasis there is on the challenges to research integrity posed by the use of AI. There is a paper with a bibliography, a one-page summary, and an infographic. In brief, the guidance revolves around five factors:

  • Validity (is the GAI output accurate?);
  • Plagiarism (is it appropriately referenced?);
  • Transparency (is it clear how it has been used?);
  • Confidentiality (is any data input to it secure?); and
  • Publication (does the use of GAI conform to publishers’ criteria?).

AC: The guidance focuses on Generative AI, which for obvious reasons has dominated the public discourse and pushed Higher Education Institutions to regulate principles and practices of use. Interestingly, your provocation went straight to unpacking the principles that are in turn implied by the regulations. This impressed me as a lucid assessment which clarifies how our attitude to technology is anything but free from cultural assumptions. Can you please go through it again here for our readers?

Pff: So, I was struck by two aspects of the guidance, which for me pose further questions. First, it relies on a fundamental principle of accountability of the human author. The rationale behind the various checks on AI outputs underlined here is that an identifiable human individual can be held accountable for the research, and that their ownership of the research can be confirmed and sustained. There are multiple aspects to this. Accountability and verifiability are certainly crucial in a geopolitical context of widespread disconnection between expression, reference, and truth, and in a psychosocial context in which online anonymity appears to foster abuse and aggression. However, even if it is perhaps justifiably keyed into values of career development and inclusivity, the stress on individual accountability is also part of a neoliberal culture of individual ownership, risk avoidance and securitisation. In other words, there are multiple factors converging on AI, research ethics and integrity here, which are difficult to disentangle.

AC: This effort to disentangle the factors that contribute to how we perceive and assess new technologies entering the research space is worthwhile not only for improving guidance and practices but also, more fundamentally, for making us re-assess taken-for-granted principles (and taken for granted by whom?). I would think that defining more responsible pathways to the integration of AI into research practices is one of the areas Humanities researchers are well equipped to contribute to with a competent critical voice. Can you tell us more about the second point you raised?

Pff: The second point is that the guidance insists that, and I quote: ‘the original intellectual contributions of any piece of scholarly work must (and can only) be created by human researchers.’ The guidance thus refers to ‘GAI-produced text’, and this delimits the scope of the prescribed use of GAI. It bears repeating: ‘the original intellectual contributions of any piece of scholarly work must (and can only) be created by human researchers.’ Is the prescribed use of GAI thus limited to the production of prose? What is the ‘original intellectual contribution’? If this is, say, the ‘idea’, the implication is that the idea can be separated from its linguistic expression, which is a questionable assumption in some if not all disciplines in the Arts and Humanities. I would also suggest that this restriction is in continuity with the ethos of individual accountability I discussed before. And I would propose that if GAI is considered solely, or prescribed solely, as an expedient means of text production, its usage is unproblematic but without interest. If it is only, or only usable as, a means of text production, then it is something like a typewriter and not a research tool.

AC: I liked how you highlighted the narrow scope of the guidance document, not so much to criticise it as to question its assumptions, namely that GAI is used to produce prose in a mechanistic sense. Scholars of the material conditions of text production (from manuscript and book studies) argue there is much more to the means of text production than a dump of words from one support to another; the horizon of your point, however, reached far beyond the material production of text, foregrounding the fallacy of dissecting ideas from their expressions, as I’ll let you explain.

Pff: So, I do not think the ‘idea’ can be separated from its linguistic expression. I think the articulation in prose of an idea is an intrinsic element of the idea. Idea and prose are inextricably welded together. Moreover, it is hubris to propose that the ‘idea’ can be created by the human researcher alone. ‘Original research contributions’ arise through intersections and connections across networks not only of human researchers but also of the networks that human agents form with technological machines and apparatuses of all kinds, one of which is language. The notion of the ‘original human researcher’ is a myth of a theological character.
If this is the case, I would suggest that the protection of the ‘original intellectual contribution’ looks like a defence mechanism in a conflict that is already lost. Ideas are already produced in a context of distributed connectivity across human and other networks.

AC: This is where the chord struck most loudly for me, in harmony with a paper we co-authored with KDL’s founding Director (Smithies, ffrench and Ciula 2023). A superficial summary of that paper would be that it highlighted, and indeed embraced, the collaboration of humans (technical subjects by necessity) and machines; not as a way to escape the accountability and verifiability you mentioned above but, on the contrary, to acknowledge those intersections and connections in specific contexts, such as KDL, and to design techno-philosophical experiments that benefit from the awareness of a rich research ecology and culture. Where from here, then?

Pff: From here it makes sense to me to think of GAI as an interlocutor with which ideas are generated in dialogue, while acknowledging that the dialogue one can have is conditioned by the relatively and perhaps only temporarily limited parameters of the AI. In response to a prompt, which voices a demand, the AI will aim to satisfy that demand by any means necessary. It will only ever produce a response within the bounds of probability. It will also not question the premise of the question. In the light of these very rapidly sketched notions one might say that the ‘original intellectual contribution’ emerges as the capacity to displace the focus from the generation of intelligence to the questioning of the question, the capacity to shift the paradigm.
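To make the phrase ‘within the bounds of probability’ concrete for readers less familiar with how these systems work: at each step, a language model samples its next word (token) from a probability distribution conditioned on the prompt so far. The toy sketch below, with invented numbers and no claim to describe any particular model, illustrates why a response can only ever fall within that distribution:

```python
import random

# Hypothetical next-token probabilities, for illustration only; a real
# model derives a distribution like this from the entire prompt.
next_token_probs = {
    "original": 0.45,
    "novel": 0.30,
    "derivative": 0.15,
    "absurd": 0.10,
}

def sample_next_token(probs: dict) -> str:
    """Sample one continuation in proportion to its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# A continuation assigned zero probability can never be produced, and
# nothing in this mechanism questions the premise of the prompt itself.
print(sample_next_token(next_token_probs))
```

The design point, echoing Pff’s remark: the sampler satisfies the demand voiced by the prompt, but it has no means of stepping outside the distribution to question the question.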

AC: This is where the unpacking of principles and assumptions, in order to affirm other principles that we, collectively as researchers and professionals, are more in tune with (for example, around distributed connectivity), can in my opinion also guide research practices that experiment with AI. Can we in KDL, operating at the intersection of AI/ML and the Arts and Humanities, shape experiments in such a way that they can be used to challenge theories? To question the question and, who knows, potentially to shift the paradigm? These are exciting times to continue working, in creative, critical and responsible ways, with technologies that tear down the myth of their separation from our research production.