Creative Interventions

With a group of such creative and engaged participants, it’s hardly surprising that dozens of fantastic ideas were flying around at the workshops. We couldn’t develop them all, so we picked the four interventions the team thought most feasible to bring to prototype stage in the short time available. Summaries of each intervention follow, including future plans and the iterations of Human Verses Machine being shown at the Book Festival.

1. Human Verses Machine

Human Verses Machine is a concept developed by Ray Interactive into an immersive experience that encompasses an interactive artwork and a short stage show.  

Informed by the WWAI workshops, this generative AI intervention asks the audience to step into a fictitious world that addresses key curiosities and concerns about Large Language Models (LLMs). 

The fictional LEX-9000 “Automatic Writing Machine” only slightly exaggerates the problematic incentives and behaviours of real AI products, including OpenAI’s ChatGPT model which powers LEX’s responses.

AI’s potential for good remains vast, but current bad incentives are scuppering that potential.

The interactive artwork created by Brendan McCarthy and Sam Healy appears at various locations during the Edinburgh International Book Festival. 

The stage performance, scripted and directed by WWAI participant Clare Duffy of digital theatre company Civic Digits, will appear before a panel discussion on the topic – Page Against The Machine: Writing in the Age of Artificial Intelligence – taking place in the EIBF Spiegeltent on 19th August 6pm-7pm.  

The performance will be followed by a panel featuring Pip and WWAI participants Camilla Grudova, Jan Rutherford and Burkhard Schafer, with time for audience Q&As.

More info and tickets here: https://www.edbookfest.co.uk/the-festival/whats-on/page-against-the-machine 

2. Renegade Pen Test

Using the Text Masher to expose weaknesses in ChatGPT

The Renegade Pen Test intervention brought together several themes from the workshops revolving around how AI ‘copes’ with linguistic diversity and difference (and vice versa), and how the unique qualities of poetry have historically been harnessed in subversive and powerful ways, such as espionage and code-breaking. Borrowing the Pen Test (Penetration Test) method from the field of cyber-security, the intervention sets out to test how non-normative ways of writing (e.g. minority dialects, neurodivergent forms of expression, or diasporic communities using creole, broken or colloquial English) can find and exploit vulnerabilities in AI systems, such as the datasets on which Large Language Models (LLMs) train.

The Renegade Pen Test thus ‘queers’ the traditional pen test by giving power and agency to the (literal) pen of the writer, exploring how non-normative language might evade capture by quantifiable AI models and instead become a tool of power that might ultimately disrupt the commercial and creative greed of the LLMs that fuel the GenAI industry. Workshop participants were encouraged to experiment with form and content, working both with AI tools (if desired) and against AI by discovering how ‘non-normative’ ways of writing challenge the computational logics of AI systems and datasets.

Led by Andrew and Pip, the workshop began with a discussion of how the qualities and nuances of poetry might pose a challenge to a computer passing the Turing Test, as considered by Alan Turing in his 1950 paper ‘Computing Machinery and Intelligence’. With the aid of a custom-made ‘Text Masher’ tool (built by Ray Interactive), participants discussed and experimented with ways to confuse or subvert ChatGPT by injecting ‘linguistic bugs’ into the system, pushing the AI to ‘malfunction’ and thus revealing its fragility.

As well as discussion of the power of non-normative language, the intervention provoked some interesting ideas around the reproducibility and standardisation of digital text as a key factor in the decline of creative agency and thus the vulnerability of language to computational systems such as AI. Participants considered how unique characteristics of non-digital texts such as format, rubrics, illustrations, marginalia and original fonts are not currently considered by Large Language Models, and may be powerful tools in crystallising and visualising what AI cannot do.

UPDATE: the team are actively seeking funding to take this particular intervention forward.

3. X-Ray Specs (Ingredients Lists)

The idea for this intervention arose from workshop discussions around transparency and ethics in writing and publishing, and touched on questions raised by many participants about what actually constitutes AI when it comes to the writing process.

Participants imagined what it would mean to be able to see how much ‘AI’ goes into the writing of different texts, and to be able to decide whether or not to consume the product, much like the nutritional labels that appear on foodstuffs in a supermarket. Much discussion was had about what ingredients would put readers off a particular text, and what levels of AI input would make it ethically acceptable to publish or read. If a novel was written with the help of ChatGPT, for example, would it carry a ‘high sugar’ type warning on the label? And what would be the ‘palm-oil’ equivalent in literary terms: the ingredient or level of content that, for many consumers, would mean an outright boycott of a product for moral or ethical reasons?

WWAI team members Billy Dixon and Evan Morgan took the idea forward and speculatively designed a nutritional label for written work that could be filled in and stuck on the back cover of a book or other texts, thus affording the prospective consumer the information needed to make an informed decision on what texts they choose to ingest or engage with.

The X-Ray Specs intervention was tested and critiqued at the third WWAI workshop, where participants were invited to engage with the concept through a range of texts. On the table were a novel, a magazine, a research paper, a school textbook and a poem, as well as a set of stickers, some empty and some partially filled in, with criteria for ‘Text Ingredients’. The novel, The Spy by crime writer Ajay Chowdhury, was picked as an example because its author has famously been open about using AI tools such as ChatGPT to help write the book; a pre-filled Text Ingredients sticker was attached to its rear cover.

The workshop provoked some fascinating discussion around the potential uses of such a tool by different stakeholders in the book industry, as well as a considerable degree of concern and pushback against its potentially problematic deployment. Participants also raised important issues around equality, diversity and inclusion, noting that the use of AI tools can create a more level playing field for writers with disabilities.

4. AI Folktales and Scottish Whispers

This intervention explores themes such as translation, provenance and collective authorship by examining how AI transcribes different voices and translates the resulting prompts into images. It combines two ideas generated in the workshops. The first is the concept of an ‘AI rolling folktale’: an iterative narrative generated by AIs trained on different datasets (and thus with different and shifting cultural references), which could also be overwritten or edited according to the ever-changing rules and norms of the day. The second, ‘Scottish whispers’, explored the relationship between AI and dialect, and how non-normative language is often misunderstood or mistranslated by AI.

Led by Francis and Savannah, the workshop asked participants to build a collective story, with each person dictating the next line via the AI transcription service in Microsoft Word. Each line then acted as a prompt fed into various generative AI image generators, exploring how the prompts were interpreted by the underlying models.

The intervention revealed some interesting insights, especially around human-centred (mis)interpretations of narratives: prompts deliberately devoid of human characters still generated images of human figures. The team noted that this anthropocentric trait seemed to be a feature/bug specific to Adobe Firefly rather than to AI generally. The image generators also struggled with cultural references such as mystical figures (e.g. selkies), which went unrecognised.