Artificial Intelligence in Creativity
It is a truth universally acknowledged that a conference attendee with the keys to the corporate Twitter account will be in need of Wi-Fi. In the splendid setting of the Assembly Rooms, I was unable to keep the feed fed. This was the cause of some frustration until I adopted a more stoic perspective and concentrated instead on the substance of the discussions, rather than trying to reduce them to pithy soundbites in a vainglorious attempt to win over the coveted Followers.
The UKRI-sponsored Beyond Conference is now in its second year and was hosted in a pleasingly festive Edinburgh. Spread over a day and a half, it comprised commercial exhibitors, poster sessions, artistic interventions and chaired debates, bringing research and the creative industries together.
The connecting theme this year was the use of Artificial Intelligence in all walks of life: a well-trodden topic, but viewed through the lens of the creative’s mindset. My own contribution was a dose of scepticism about the transformative claims being made and a curmudgeon’s reluctance to acknowledge the value of machines in creative endeavours. Regardless, I did my best to listen to the discussions with beginner’s mind.
Arriving early, I ensured I was at the front of the queue for exhibitor freebies, being a sucker for any merchandise which can be repurposed as Christmas gifts (‘Thank you for my branded memory stick, how did you know?’). Although freebies were in short supply, there was a good variety of demonstrations showing everything from holographic displays to data exploration platforms in VR. A recreation of Glasgow School of Art’s Rennie Mackintosh building caught my eye. Twice gutted by fire, the library had been reimagined for VR with the emphasis on giving a true sense of space and presence, foregrounding its original artistic purpose. The Infinite Hotel, a VR game riffing on the eponymous thought experiment, also had vacancies. Impressive projects all, but the business end of the conference was taking place in the main hall.
It isn’t unreasonable, given the setting, that the focus should fall predominantly on monetising AI, though it is a little disappointing to see that the much-repeated promise to transform our lives is so often equated with “selling more stuff to more people”. It might be interesting instead to leverage that raw power and cold logic toward the task of warning consumers about purchases they will regret, or which won’t bring them the enduring retail therapy they are looking for.
One session presented visions of using AI to refine or augment the perceived chore of storytelling. For instance, could we use AI to assess the likelihood that a screenplay will mature into a profitable movie? The answer is “probably yes”, but a further question is: do we really want to? The Shawshank Redemption was a relative flop at the box office. Some of the big studios’ worst commissions have added to the canon of cult cinema, which is far richer for their errors. In spite of assurances that the training models for this task don’t favour profitability over (subjective) artistic merit, I suspect that loops of positive feedback will gradually reinforce recommendations about the character of protagonists, pace and setting, etc. Leaping forward a human generation, would consumer tastes also have been retrained according to these narrow criteria? Anyone who survived the eighties knows how formulae can be abused to mass-produce disposable art – though the triumvirate of Stock, Aitken and Waterman did possess human souls, to the best of my knowledge.
There was a lot of discussion concerning bias in networks and in training datasets. One contrary slant on this is that AI presents an opportunity to weed out faulty human heuristics. Any reader of Daniel Kahneman will be able to list examples of decision bias resulting from fatigue, flawed baselines, subtle priming and anchoring, and the need to create a credible narrative to explain a mundane observation.
The artist Jake Elwes provided some compelling examples, highlighting the biases of training data in his work whilst exercising careful creative control over his process. For example, a facial recognition dataset was queered through the insertion of portraits of drag artists. My personal favourite from his works flipped the human-led dialogue with the machine into reverse and asked the computer what it understood by the term ‘marsh bird’. What emerges from the trained neural network when this question is asked are images of birds that never existed. They are clearly marsh birds, but plucked from Plato’s realm of ideas: a distillation of what it is to be a marsh bird. Imagine if a vacuum cleaner were switched from suck to blow and what emerged wasn’t dust, but the concept of dust (I do the work a disservice with this clunky comparison, but I couldn’t resist the analogy!).
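For the technically curious, the general trick (and I should stress that Elwes’s own pipeline is his own, not what follows) can be sketched in a handful of lines: ask a pretrained, class-conditional generative network to dream up fresh examples of a category it has learned. The sketch below assumes the publicly available pytorch-pretrained-biggan package and its BigGAN weights, with ‘bittern’ standing in for ‘marsh bird’ simply because it happens to be one of the classes the model knows about.

```python
# A minimal sketch, not Elwes's actual process: class-conditional sampling
# from a pretrained BigGAN via the pytorch-pretrained-biggan package.
# 'bittern' stands in for 'marsh bird' since it is an ImageNet class the
# model was trained on; every image produced is a bird that never existed.
import torch
from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names,
                                       truncated_noise_sample, save_as_images)

model = BigGAN.from_pretrained('biggan-deep-256')  # downloads pretrained weights

truncation = 0.4  # lower values give more 'typical', archetypal samples
class_vector = torch.from_numpy(one_hot_from_names(['bittern'], batch_size=4))
noise_vector = torch.from_numpy(truncated_noise_sample(truncation=truncation,
                                                       batch_size=4))

with torch.no_grad():
    output = model(noise_vector, class_vector, truncation)

save_as_images(output)  # writes output_0.png ... output_3.png
```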
So, we can teach our computers about colour theory, composition, clarity of line, harmony and discord, and ask them to mimic our tastes, but we can’t attribute motivation to their output. A computer will never try to make sense of a broken heart or warn us about our mortal vanity. It isn’t concerned with the cadence of birdsong beyond its algorithmically identified capacity to resonate with a human listener. I realise I’m conflating Creativity with Art when I suggest that art created entirely by an AI is harder to value because it comes with no intrinsic motivation.
Of course, it isn’t generally being suggested that AI represents an autonomous creator that one can commission to pen the next bestseller. More often, AI is being framed as an assistant that streamlines the creative process or removes drudgery. Several speakers noted the ability of AI to break them out of the routine of their practice, which then helped them to remix their oeuvre. Whilst this is an intriguing solution to writer’s block, can it really be any more creative than throwing a pack of cards into the air to shuffle them? How much control over the creative process is being exercised, or rather, signed over?
The other big question being addressed was the lack of legislation around how AI is being used or might eventually be used. What aspects of AI currently being touted as convenient timesavers will ultimately require compulsory buy-in from the public? My colleague Dr. Claire Reddleman says that a useful tool for thinking about these questions is to ask “How will the least powerful person in this interaction be affected?” I’m thinking specifically of examples from the gig economy, where a delivery driver might be expected to have their face identified or recorded by a smart doorbell. So much has come to light over the last fifteen years in the age of social media about how the connected world can be manipulated by a few huge commercial entities that we should have better foresight – one hopes.
Democratising the power of clever machines isn’t just about transparency and protective legislation, though. These technologies are making their way into the mainstream, but what use is a tool that only a gifted elite can wield? It’s complicated stuff, and the person in the street risks feeling ever more disconnected from processes that will eventually govern their day-to-day experiences. This technical barrier to entry is something that I think we should strive to overcome. Hopefully the school IT curriculum will keep pace, though it only recently moved on from ICT lessons being Microsoft Office training sessions.
At times I found it hard to determine a clear line (if there was one) between what was being mooted as the ‘next big thing’ and dire warnings about ‘where we are heading’. Our expectations have to be tempered with serious thought about how, and under what conditions, we allow clever machines into our domestic and professional lives, and to what extent we are comfortable outsourcing parts of our creative processes to fake minds that will never appreciate the kitsch genius of Plan 9 from Outer Space. Regarding the more pervasive implementations of AI, might we now be better prepared, in the shadow of Cambridge Analytica, to consider the implications of wholeheartedly embracing these promises, or are they still too foggy to discern?
Reader, it worries me.