#8: The Nightmare World of AI Book Generators
This will get worse before it gets better.
Like many others, I was both horrified and fascinated to see the announcement of Sudowrite’s latest AI tool, Story Engine: an app that automatically writes whole books for you. A similar tool is Book in a Click.
The concept behind these tools is simple. You fill in a form, pick a genre, name your characters, define their goals, motives, and so on, set up all the parameters… then click Write. Within minutes, the AI language model will output a complete book. Fiction or non-fiction, short stories, whatever you want. If you don’t like the results, click Retry and it’ll output a different version.
There’s a lot of fine-tuning you can do from there, especially in the more advanced apps. Book in a Click advertises that it can generate anything up to 270 pages in one go, without repetition or plagiarism. Sudowrite suggests that the whole process of fine-tuning a book “in your style” might take a week.
A user review on the former’s website says: “I use Book in a Click to generate novels and very long blogposts (the ones that rank 1st place in Google).” This user’s byline identifies them as an “established author.” Meanwhile, Adi Robertson reviewed Sudowrite’s Story Engine for The Verge, and concluded:
“For all its shortcomings, Story Engine is strangely satisfying. If early AI writing felt like guiding a very well-read toddler into telling stories, Story Engine feels sort of like building a video game prototype. You start with an idea and try to get a computer to execute it. The results probably aren’t quite what you expect, but through trial and error, you can lean into what the system does well and find the fun.”
If you’re patient, you can also generate whole books with the free AI language model ChatGPT. In one of the AI groups I follow on social media, someone called John Pagett posted:
“I use ChatGPT to produce the book outline. Chapters, foreword, etc. I then build a table of contents. I then spend a period of time editing it. Then pop it on Kindle marketplace … For the books I’ve written, type my name in Kindle, you’ll find them … next week I should have a couple more to add.”
So I looked him up on Amazon.
At the time of my search, John Pagett had five self-published books to his name. (He added five more in the time it took me to write this article.) Pagett’s first book, AI Fundamentals: A Beginner’s Guide to Artificial Intelligence, was released on 10 May 2023. That was followed on 18 May by Resolving Neighbour Disputes: A Comprehensive Guide for UK Residents. On 27 May he published two books for young children: The Boy Who Learned to Tell the Truth and Tobes Football Journey: From the Park to the Academy. On 9 June he branched out into language guides, releasing Spanish in a Snap: Your Ultimate Beginner’s Guide.
These ebooks generally retail at $0.99, and nowhere in any of the descriptions could I find any mention that they were written by ChatGPT.
However, in the world of AI publishing, John Pagett seems to be a comparatively small fish. I would later stumble across Daniel D. Lee, whose enviable writing output amounts to a total of 130 books in a matter of months.
These range from wilderness survival to cooking guides, to biographies of tech and political leaders, to “blockchain philosophy,” to sci-fi detective thrillers… On Twitter, Daniel D. Lee describes himself as a “bestselling author,” while his Amazon bio claims:
“Each book is meticulously crafted, integrating innovative technology with traditional artistry, to create visually stunning and intellectually stimulating collector’s items” … “the future of publishing.”
None of the book descriptions that I saw mentioned that they had been created by AI. However, with Lee’s ebooks often released just days apart – and sometimes several on the same day – the author would presumably not have time even to read them himself, let alone edit them.
(That “bestseller” claim might not be entirely unfounded, by the way. The Amazon system awards bestseller badges in each genre category, so that one way to game it is by tagging your book with the most obscure categories you can find from their list of some 16,000 options. An ebook with just one sale will never become a bestseller in science fiction, for example, but it may have a better chance in Amish romance or cryptopunk memoir. And I only made up one of those.)
There is something uncanny – and perhaps self-congratulatory – about an AI writing a book titled How AI and Blockchain Saved the World from Economic Apocalypse… though apparently it’s a great read. That’s what Amazon reviewer Dr Phillip Davies said, in his glowing 5-star review. He called it an “exceptional book,” and said that author Daniel D. Lee’s “passion for the subject matter shines through in every page, making it an enjoyable and thought-provoking read.”
Before you ask: no, this review did not pass an AI content check. It scored a “99.9% probability for AI” according to the AI content detector Copyleaks.
The same Dr Phillip Davies also gave 5-star scores and long, gushing reviews to many of Lee’s other books, suggesting that this academic has done little since ChatGPT launched other than read and review the many works of Daniel D. Lee.
Here’s a question: do we call people like Pagett and Lee “authors”?
In that AI group on social media, I saw someone making the Michelangelo argument: Michelangelo worked with a whole team of artists under him, often providing only the vision and instruction rather than working with his own hands. Or look at fashion studios, they said, where the name on the label is rarely that of the person who held the scissors, needle and thread.
In principle, I can accept that sort of argument. AI is essentially just a tool, and it can be an incredibly useful one. I have been using it myself this year: for structuring pieces of writing and creating draft layouts, translating documents, and transcribing audio to text.
I am yet to see AI-generated text of a high enough quality that I would be proud to attach my name to it; but neither am I naive enough to believe that that day isn’t coming. However, I will always have one thing which these AI language models do not: knowledge of things not already on the internet.
And that brings me to my conclusion here…
The Problem with AI-Generated Books
1. They add no value
What is the point of a book? When I write a text, I try to put things into it that have not been said before. That might be my own description of a place, or quotes from a conversation I’ve had in the real world; or perhaps I believe I have an original take on a familiar topic. Either way, in these cases the text (hopefully) has value in that it moves the conversation forwards.
By their very nature, AI language models like ChatGPT are incapable of originality. They cannot have new ideas, reason, or make new observations; they can only rehash the material already included in their training data. In non-fiction this leads to books with no original value to offer. In fiction, the current AI tools can only tell the most generic of stories – an average product of all the stories that came before. (And as a result I half suspect that several Hollywood studios may already be using them to write their TV shows.)
2. They lack accuracy and accountability
One of the first things I found during my experiments with ChatGPT is that AI lies. When I asked it to help me write a historical article, it output some perfectly functional text complete with dates and references that all looked very convincing. However, on closer inspection I realised the dates were wrong. I don’t know whether ChatGPT invented dates that it couldn’t find, in order to still deliver the style of article I had asked for, or whether it had been trained on data that already contained the errors. Either way, it showed that the model prioritised output over accuracy (or what we humans might call honesty).
The lack of moral logic in these models can have deeper implications too. Sometimes AI can tune into and magnify humanity’s own ills – for example, research has shown AI demonstrating discriminatory behaviour such as racial and gender biases.
In other cases, the lack of human oversight in AI-produced texts could have lethal effects. The Guardian reported a case where a supermarket’s AI meal-planner app suggested recipes for chlorine gas and “mosquito-repellent roast potatoes.” No one is really stupid enough to eat that, you might say. But scale this up to the level of countless “AI authors,” many with fake professional credentials, each putting out one full guidebook per week into the Amazon store…
A single human might be immoral, or wrong, but they are usually consistently so, which makes it possible to challenge or report them. It makes them accountable. Humans are also comparatively slow to produce work.
These book-in-a-click tools have only been around for a few months now, and in terms of output, I think we’re still looking at the tip of the iceberg. As @heyMAKWA wrote on Twitter, in response to their discovery of AI-generated foraging guides that recommended the consumption of deadly fungi:
“AI clods have taken the world’s biggest collection of shared human knowledge and turned it into a fucking landfill almost overnight.”
3. They make life harder for actual writers
Getting a book published the traditional way is really bloody difficult. I decided I wanted to be an author when I was 7, but it took me a full three decades (and several complete manuscripts) before I actually got a book deal. In comparison, my second book was agreed with the publisher just months after my first one proved itself a success.
For as-yet unpublished authors, I can see the rise of AI book generators making it harder than ever to get a foot on that ladder.
Earlier this year, the sci-fi magazine Clarkesworld announced it was closing all submissions, after its editorial team was bombarded with hundreds of AI-generated story pitches. Founding editor Neil Clarke told The Guardian that these AI authors were “largely driven in by ‘side hustle’ experts making claims of easy money with ChatGPT.” The magazine remains closed to submissions until further notice, and similar things are happening across the industry.
If the gatekeepers to traditional publishing are closing their doors, that always leaves self-publishing – but good luck trying to get your ebook noticed by new readers in the Kindle store, when your competitors are each popping out a new book every day.
This bubble will probably burst before long. I wonder how much someone like Daniel D. Lee actually earns from his 130 ebooks on Amazon… Very few seem to have any reviews or ratings other than those he wrote himself. I can’t imagine the earnings are really enough to justify all the effort.
But then, even as this first wave of AI-generated books begins to ebb, the technology will have kept improving… and the next wave will be stronger, more convincing.
Many writers have already lost their jobs to AI. Many more will follow. There are also new types of problems emerging – for example, the case of Jane Friedman, a writer whose name was borrowed by an AI author who then started flooding the market with books that Friedman did not write.
This will get worse before it gets better.
In the end though, a lot of power rests with the reader. The market will decide the fate of AI-generated books; if human authors are to keep their jobs in the future, it will only be because readers decided they preferred to read human writing. The big challenge is going to be knowing how to tell the difference.
Also, a court in America has just ruled that works created by AI cannot be copyrighted, so once you’ve “created” something, you can’t stop other people from copying it.
I can't even imagine the feeling of empty triumph you'd feel after "publishing" a book which was written by a machine.
I don't care how much guidance and input you can give an AI, that just isn't writing.