Back in 2022, when ChatGPT arrived, I was part of the first wave of users. Delighted but also a little unsure of what I would do with it, I asked the system to generate all sorts of random things. A song about George Floyd in the style of Bob Dylan. A menu for a vegetarian dinner. A briefing paper on alternative transportation technologies.
The quality of what it produced varied, but it highlighted something that is even more evident today than it was then: this technology was not just a toy. On the contrary, its arrival marks an inflection point in the history of humanity. In the years and decades to come, AI will transform every aspect of our lives.
But we are also at an inflection point for those of us who make a living from words, and for everyone who works in the creative arts. Whether you’re a writer, actor, singer, filmmaker, painter or photographer, a machine can now do what you do, instantly and for a fraction of the cost. Maybe it can’t do it as well as you yet, but like the Tyrannosaurus rex in the rearview mirror in the original Jurassic Park, it is gaining on you, and quickly.
Faced with the idea of machines capable of doing everything humans can do, some have simply given up. Lee Sedol, the Go grandmaster who was defeated by DeepMind’s AlphaGo system in 2016, later retired, declaring that AlphaGo was “an entity that could not be beaten” and that his “whole world was collapsing”.
Others have asserted the innate superiority of art created by humans, arguing, in effect, that there is something about the things we create that cannot be replicated by technology. In the words of Nick Cave:
Songs are born of suffering… of the complex, internal human struggle of creation… (but) algorithms don’t feel. Data doesn’t suffer… What makes a great song great is not its close resemblance to a recognizable work. Writing a good song is not mimicry, nor replication, nor pastiche, it is the opposite. It is an act of self-murder that destroys all one has strived to produce in the past.
It’s an attractive position, and one I’d like to believe in – but unfortunately, I don’t. Not only does it commit us to a hopelessly simplistic – and frankly reactionary – binary in which the human is inherently good and the artificial inherently bad, it also means the category of creation we are defending is vanishingly small. Do we really want to limit the work we value to towering works of art, the products of deep feeling? What about costume design, illustration, book reviews, and all the other things people make? Don’t they matter?
Perhaps a better place to begin defending human creativity is with the creative process itself. Because when we make something, the end product isn’t the only thing that matters. In fact, it may not even be the thing that matters most. There is also value in the act of making, in the craft and the care that go into it. That value does not reside in the things we make, but in the creative labour of making them. The interaction between our minds, our bodies, and what we create is what brings something new – a certain understanding or presence – into the world. The act of creating also changes us. It can be joyful, but at other times it can be frustrating or even painful. Still, it enriches us in a way that simply getting a machine to generate something for us never will.
What is on offer here is not the freeing of our imagination but its outsourcing. Generative AI takes part of what makes us human and hands it to a corporation so it can sell us a product that claims to do the same thing. In other words, the real purpose of these systems is not liberation but profit. Forget the glib marketing slogans about boosting productivity or unlocking our potential. These systems are not designed to benefit us as individuals or as a society. They are designed to maximize the ability of tech companies to extract value by destroying the industries they disrupt.
This reality is particularly striking in the creative industries. The ability of AI systems to create stories, images and videos did not come out of nowhere. To do these things, AIs must be trained on enormous amounts of data. These datasets are built from publicly available information: books, articles, Wikipedia entries and the like in the case of text; videos and images in the case of visual material.
The exact nature of this material is already highly contested. Some of it, like Wikipedia and out-of-copyright books, is in the public domain. But much of it – perhaps most of it – is not. How could ChatGPT write a song about George Floyd in the style of Bob Dylan without access to Dylan’s songs? The answer is that it couldn’t. It could only imitate Dylan because Dylan’s lyrics were part of the dataset used to train it.
Between the secrecy of these companies and the fact that the systems themselves are effectively black boxes, whose internal processes are opaque even to their creators, it’s difficult to know exactly what any individual AI has ingested. What we do know is that large quantities of copyrighted material have already been fed into these systems, and are still being fed into them as we speak, all without authorization or payment.
But AI does not stop at gradually eroding the rights of authors and other creators. These technologies are designed to replace creative workers altogether. The writer and artist James Bridle has compared this process to the enclosure of the commons, but any way you cut it, what we are witnessing is not just “systematic and massive theft”, it is the willful and deliberate destruction of entire industries and the transfer of their value to Silicon Valley shareholders.
This unbridled rapacity is not new. Despite advertising campaigns promising care and connection, the entire model of the tech industry depends on extraction and exploitation. From publishing to transportation, tech companies have inserted themselves into traditional industries and “disrupted” them by skirting regulations, trampling on hard-won rights, or simply shutting down things that were formerly part of the public sphere. In the same way that Google harvested creative works to build its libraries, file-sharing technologies devastated the music industry, and Uber’s model relies on paying its drivers less than taxi companies do, AI maximizes its profits by refusing to pay the creators of the material it relies on.
Meanwhile, the human, environmental and social costs of these technologies are carefully kept out of sight.
Interestingly, the sense of helplessness and paralysis many of us feel in the face of the social and cultural transformation unleashed by AI resembles our inability to respond to climate change. I don’t think it’s a coincidence. In both cases, there is a profound disconnect between the scale of what is happening and our ability to conceptualize it. We have a hard time imagining fundamental change, and when faced with it, we tend to either panic or simply shut down.
But it is also because, as with climate change, we have been encouraged to believe that there is no alternative, that the economic systems we live within are natural, and that arguing with them makes about as much sense as arguing with the wind.
In fact, the opposite is true. Companies like Meta and Alphabet and, more recently, OpenAI have only achieved their extraordinary wealth and power because of very specific regulatory and economic arrangements. Those arrangements can be changed. That is within the power of governments, and we should insist on it. Cases are currently before the courts in a number of jurisdictions seeking to have the mass expropriation of the work of artists and writers by AI companies declared copyright infringement. The outcome of these cases is not yet clear, but even if the creators lose, the fight is not over. The use of our work to train AI must be brought within the protections of the copyright system.
And we must not stop there. We should insist on payment for work that has already been used, payment for any future use, and an end to the tech industry’s practice of taking first and asking forgiveness later. Their use of copyrighted material without permission was not accidental. They did it deliberately because they believed they could get away with it. It is time they stopped getting away with it.
For this to happen, we need regulatory structures that ensure transparency about which datasets are used to train these systems and what those datasets contain. We need auditing regimes to ensure that copyright and other forms of intellectual property are not violated, and significant penalties when they are. And we must insist on international agreements that protect the rights of artists and other creators rather than facilitating corporate profits.
But most importantly, we need to think seriously about why what we do as human beings, and as creators and artists in particular, matters. It is not enough to mourn what is being lost, or to fight a rearguard action against these technologies. We need to start making positive arguments for the value of what we do, and of creativity in general, and to think about what that might look like in a world where AI is an omnipresent reality.
-
This is an edited version of the Australian Society of Authors’ 2024 Colin Simpson Memorial Keynote Lecture, titled ‘Creative Futures: Imagining a place for creativity in a world of artificial intelligence’.
