Kester Brewin 

Why I wrote an AI transparency statement for my book, and think other authors should too

Until we have a mechanism to test for the use of artificial intelligence, writers need a tool to maintain trust in their work. So I decided to be completely open with my readers
  
  

Did it help and if so, admit to it … ChatGPT. Photograph: NurPhoto/Getty Images

“Where do you get the time?” For many years, when I’d announce to friends that I had another book coming out, I’d take responses like this as a badge of pride.

These past few months, while publicising my new book about AI, God-Like, I’ve tried not to hear in those same words an undertone of accusation: “Where do you get the time?” Meaning, you must have had help from ChatGPT, right?

The truth is, it is becoming harder and harder to resist help from AI. My word processor now offers to have a go at the next paragraph, or tidy up the one I’ve just written.

My work – for a research charity exploring the impacts of AI on the UK labour market – means that I read daily about the profound implications of this technological revolution on almost every occupation. In the creative industries, the impact is already enormous.

This was why, having finished the book, I decided that my friends were right: I did need to face the inevitable question head-on and offer full disclosure. I needed an AI transparency statement, to be printed at the start of my book.

I searched the internet, thinking that I’d be able to find a template. Finding nothing, I had to come up with one myself.

I decided on four dimensions that needed covering.

First, has any text been generated using AI?

Second, has any text been improved using AI? This might include an AI system like Grammarly offering suggestions to reorder sentences or words to increase a clarity score.

Third, has any text been suggested using AI? This might include asking ChatGPT for an outline, or having the next paragraph drafted based on previous text.

Fourth, has the text been corrected using AI and – if so – have suggestions for spelling and grammar been accepted or rejected based on human discretion?

For my own book, the answers were 1: No, 2: No, 3: No and 4: Yes – but with manual decisions about which spelling and grammar changes to accept or reject. Imperfect, I’m sure, but I offer my four-part statement as something to be built on and improved, perhaps towards a Creative Commons-style standard.
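None of this needs to live only on a printed page. As a sketch of what a Creative Commons-style standard might grow into, here is one hypothetical, machine-readable form of the four dimensions, written in Python; the structure and field names are my own invention, not part of any existing scheme:

    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class AITransparencyStatement:
        """The four dimensions of the statement, each a simple yes/no."""
        text_generated_by_ai: bool           # 1. Was any text generated using AI?
        text_improved_by_ai: bool            # 2. Was any text improved using AI?
        text_suggested_by_ai: bool           # 3. Was any text suggested using AI?
        ai_corrections_human_reviewed: bool  # 4. Were AI spelling and grammar suggestions accepted or rejected by a human?

    # My own declaration for God-Like, as described above.
    statement = AITransparencyStatement(
        text_generated_by_ai=False,
        text_improved_by_ai=False,
        text_suggested_by_ai=False,
        ai_corrections_human_reviewed=True,
    )

    print(json.dumps(asdict(statement), indent=2))

A publisher or bookseller could then read such a declaration automatically, much as Creative Commons licences can be read by machines today.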

I wanted to include it as a means of promoting open and honest discussion about which tools people are using, partly because research shows that a lot of generative AI use is hidden. With work constantly intensifying, people are wary of admitting to bosses or colleagues that they’re using tools that allow them to speed up certain tasks and steal back a little breathing space in the process: some time for recreation, perhaps, or to be more creative. If, as Elon Musk claims, AI will one day “solve” work and liberate us to flourish and create, we ought to start being open about how and where that is happening now.

But, as a writer who cares about my craft, I also wanted to include the AI transparency statement because of a meeting that left me with deep concerns. I had arranged a coffee with someone who worked for an organisation that hosts writing workshops and retreats. I asked them what thoughts they’d had about how to respond to the spectre of generative AI. “Oh,” they said, “we don’t think that we need to worry about that.”

I think that we do. Until we have some mechanism by which we can test for the use of AI – and that will be extraordinarily difficult – we at least need a means by which writers can build trust in their work by being transparent about the tools they have used.

And, to be clear, these tools are wonderful, and can be spurs for co-creation. Way back in August 2021, Vauhini Vara published an essay in the Believer in which she used GPT-3, an early OpenAI language model and forerunner of ChatGPT, to help her write a profound, rich and highly original piece about her sister’s death. Vara’s transparency statement would come out differently from mine, but this wouldn’t be to devalue her work in comparison – far from it. It would open up a new vein of creative possibilities.

When we invest in reading a book we are entering a trust relationship with the writer. That a small crew of tech bosses have squandered the Promethean act and freely given away the gift of language to machines profoundly undermines that historic trust. I have no doubt that an AI will soon “write” a marvellous book – but should anyone care? There will be weak applause. Like a flawless, lab-grown diamond, it will be artifice but not art: a trick of minor value.

But in this new reality, it will be up to writers to establish trust in the provenance of their own gems by being transparent about the labour of mining them. Pretending that writing is too honourable a craft to worry about trust is, I believe, naive.

As I outline in my book, AI is – like the atomic bomb – a vastly powerful human creation that we have no choice now but to learn to survive alongside. Being open about what is in our arsenal is one small step to preventing a writing arms race that can only lead to distrust and division.

God-Like: A 500-Year History of Artificial Intelligence in Myths, Machines, Monsters by Kester Brewin is published by Vaux Books

 
