Using AI to create anything is a socially and psychologically complex activity these days. Whether it’s a work email, a blog post, a personal letter, a school essay, ad copy, an image, or a story, plenty of people will forcefully assert that such content is cheating, not creative, not real art, or theft from “real” creatives. Many people struggle with whether or how to disclose that they used AI, and others want to detect AI-generated content, for various reasons.
I’ll delve into all these issues, but it’s going to take more than one blog post. I’ll cover much of this in the category AI Art, since text-to-image AI is ahead of AI for generating creative fiction. Lawsuits about AI-generated visual art and AI-generated code are underway, and the outcomes of these suits will shape how AI-generated content of all types enters our world. In this post, I’ll focus on the narrower topic of using AI to write fiction, and how I plan to approach the issue in my own work.
Today’s LLM-based AI tools, like ChatGPT, are great at explaining writing concepts, but terrible at implementing them. I used ChatGPT (GPT-3.5 and GPT-4) extensively when writing Bitstreams of Hope. The primary use cases, which I explore in detail below, were research and brainstorming my way out of plot holes. None of the actual text of the novel was generated by AI. One reason for this is that every “story” ChatGPT generates is incredibly clichéd. It can helpfully discuss the ways my own writing might or might not follow best practices for good fiction, but it can’t write good fiction itself. The other reason to exclude AI-generated text from my novels is that it might threaten my sense of authorship and jeopardize acceptance of my books among certain readers.
I’ll start by explaining exactly how I used ChatGPT, then put this in context with how we use other tools. The research use case is the simplest. There were several things I needed to learn about in order to write the book, and authors have been researching such topics for centuries. Before computers, authors read books, gained real-world experience, or talked to others to learn about aspects of our world. Since the internet became available, much of this research has moved online. It’s easy to use Google to learn about previously unfamiliar technologies and societal institutions, or to use Google Street View to get a sense of a place an author hasn’t visited personally. For the most part, I don’t think people feel like authors are cheating when they use these more expedient ways to gain knowledge. I’m sure some people think only the slow, traditional methods lead to “genuine” learning, but I’m guessing this attitude is a minority opinion. Note that I’m talking about peripheral aspects of the story. Of course, the main themes of a book must be understood at a deep level, and there are no shortcuts to deep understanding. But for minor details, I view my use of ChatGPT for research as simply a better, upgraded tool, a more accessible and efficient interface to the internet. So I don’t feel any guilt or discomfort about using AI to help with research.
The “brainstorming about plot holes” use case is more problematic. If AI is coming up with the ideas that drive the plot, doesn’t that threaten my sense of authorship? Should I disclose that I used AI in this way, and maybe even list ChatGPT as a co-author? For now, the answer to these questions is no, because of the limited nature of the help I received. One thing ChatGPT is good at is listing alternatives, and this includes brainstorming ways the plot or character development could go. However, it almost never came up with exactly the right approach. Most often, it either failed completely or just pointed me in a direction that I ultimately carried to fruition on my own. It felt as if I had bounced ideas off a friend or family member, and sometimes they sent me in a promising direction. I wouldn’t expect most authors to consider their human brainstorming buddies co-authors (except in extreme cases), and I don’t do so for my machine brainstorming buddy.
The question of authorship has two aspects: how I feel about it and how my readers might feel about it. Many people seem intent on carefully differentiating human-created and AI-created content, often for the purpose of disqualifying AI-generated stuff from their site, contest, or personal reading list. I think readers interested in science fiction about AI will be more forgiving of AI-generated content, as long as it’s high quality. But I’m not sure about this, and I’ll cautiously sample the zeitgeist before publishing anything with enough AI-generated material that I’d feel compelled to list the AI as a co-author.
How I personally feel about AI as a co-author is complicated by the feeling that I need to be the sole author to get full credit, the credit from which I derive a sense of self-worth and by which I judge my contributions to society. This is arguably a character defect, one I’ve been working through as I’ve discovered that putting out a novel is a much more collaborative project than I realized. Without multiple editors, a cover designer, and a team of beta readers, my books probably wouldn’t be worth reading. If AI were someday capable of replacing these human collaborators, would using it make me feel more or less like the book was my sole accomplishment? Would that be desirable or undesirable? What’s so important about receiving sole credit? Is collaboration worthwhile in its own right, regardless of how necessary it is in the creation process? Is collaboration with an AI valid or invalid? Why? What does it mean to create something anyway? Isn’t every new idea simply a remix of things we experience in our lives? Why do we even feel the right to claim that “original” ideas belong to us, when they bubble up from our unconscious minds in a way we don’t even understand, let alone control? You can see how this topic quickly becomes a confusing, entangled mess. I’ll keep blogging about what happens as my writing life unfolds.
My plan is to be an early experimenter with AI tech designed for writers, and I expect significant advances in the next five years. I’ll try the tech on side projects, just to see what it’s capable of and how I feel about it. If something threatens my sense of authorship, I’ll make a judgment at that time about how to use it, if at all, depending on where I am personally and where society is in its acceptance of AI-generated stories.
Combining the research and brainstorming use cases, I think there’s lots of room for AI systems that can do much more, yet still be regarded as simple tools that don’t deserve credit for being part of the creative process itself. We supplement our physical bodies all the time without feeling guilty that we’re cheating or being inauthentic. We don’t feel bad that we can’t lift steel beams onto a skyscraper with only our muscles and can’t run down the highway at 75 mph. We don’t feel the need to disclose that we used a car, word processing program, spellchecker, calendar with reminders, or a can opener in our daily lives. Why should we feel guilty about using AI tools that augment our mental capabilities in more sophisticated ways, like grammar checkers, tone checkers, or fact checkers?
Revising a manuscript is a very difficult, labor-intensive process, and I’d love to have more help with it. Consistency is a great example. I’d like to be able to ask an AI, “Is this character’s attitude towards AI consistent throughout the story?” and have it point out any cases where the character was inexplicably different from their baseline. When I’m considering a change to the plot, it would be fantastic to get an immediate answer about all the ripple effects the change would have, rather than relying on my memory or a painstaking, error-prone manual search. Then, if I decide to go ahead with the change, it would be a huge time saver if the AI could implement it for me, asking me to make any tricky decisions. If I’m the one in control, the AI feels like a helpful tool that could greatly increase my output, not like a co-author. However, once the AI is generating text directly in the manuscript, I think this crosses a line. It feels potentially like cheating, whatever the hell that means. I also worry about atrophy of my mental capabilities. But I think atrophy is a false worry, because we won’t turn into automatons mindlessly selecting among options AI presents to us. We’ll simply use our own minds in different ways, typically moving up the ladder of abstraction to the higher-level, more important, and more interesting ideas that drive our creative expression, while leaving the drudgery to machines. Mental augmentation by AI is a complex topic, and I’ll let you know how it goes as more advanced tools become available.
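To make the consistency-check idea concrete, here’s a minimal sketch of how such a tool might work with today’s LLM APIs. This is a hypothetical illustration, not something I’ve built: the character, the baseline, the prompt wording, and the model name are all placeholders, and it assumes the openai Python package with an API key set in the environment.

```python
# Hypothetical consistency checker: send each chapter to an LLM along with a
# character's baseline description and ask it to flag out-of-character moments.
# Assumes the openai Python package (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Placeholder baseline for an invented character; a real tool would pull this
# from the author's notes.
BASELINE = (
    "Mara is deeply skeptical of AI. She distrusts automated systems and "
    "cooperates with them only under protest."
)

def check_consistency(chapter_text: str, baseline: str = BASELINE) -> str:
    """Ask the model whether a character's behavior matches their baseline."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model works
        messages=[
            {
                "role": "system",
                "content": "You are an editorial assistant checking character consistency.",
            },
            {
                "role": "user",
                "content": (
                    f"Character baseline:\n{baseline}\n\n"
                    f"Chapter text:\n{chapter_text}\n\n"
                    "List any moments where this character's attitude is "
                    "inexplicably different from the baseline, or reply 'consistent'."
                ),
            },
        ],
    )
    return response.choices[0].message.content

# Usage: run each chapter through the checker and review anything it flags.
# for i, chapter in enumerate(chapters, start=1):
#     print(f"Chapter {i}: {check_consistency(chapter)}")
```

Even this toy version hints at the hard parts: a whole manuscript won’t fit in a model’s context window, so a real tool would need to chunk chapters and carry forward summaries, and the ripple-effect analysis I described would need some representation of plot dependencies, not just raw text.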
One reason people resist AI-generated content is that they fear it’ll be good enough and plentiful enough to drown out human-created content. If creatives lost the ability to express themselves, and art consumers lost the ability to connect with human artists, it would be a disaster for us. But is this the only effect generative AI will have? What about the average person, who doesn’t have the persistence to gain skill and mastery in a craft, but has a lot of feelings to express? Do you really have the right to say their expressive output is invalid because they got more help than a traditionally trained artist or writer?
Why is it so important that creation be difficult in order to be judged authentic or worthy? What I’ve learned about writing sheds light on this. Fictional characters seem inauthentic if things are too easy for them. Being human is difficult, and we want to read about characters who struggle like we do. Similarly, we marvel at artists and writers who can do things that we, as average people, can’t. Their output is more admirable because they struggled to make it. But is this the only reason we value other people’s expressions? What if someone has something interesting or relatable to say, but needs AI help to say it? Can’t we still connect with them?
I’ll delve more into these thoughts in the series of posts on AI art, especially the legal and economic concerns that will drive short-term policy changes. But I hope AI assistance will end up being a net good for humanity. If more people can express themselves more easily, this could result in greater empathy and social cohesion, as well as an abundance of really awesome visual and written art. Is the world ready for this much awesome? I guess we’ll find out.