Generative AI In Art - Public Perception, Critical Scrutiny, and Philosophical Discussion
INTRODUCTION - THE EFFECT OF AI ON ART CRITIQUE
In the course of my research I recently read Spectrum of creative agencies in AI-based art: analysis of art reviews (Loivaranta, Hautala, and Lundman 2025). As the title suggests, the article is an analysis of 39 reviews of artworks that incorporate generative AI in some fashion. Loivaranta, Hautala, and Lundman place the artworks on a spectrum from human-centered agency to AI-centered agency, with other permutations of these extremes filling the space between. Starting from the human side, there is human-centered AI-assisted co-agency, whereby the human actor receives mild assistance from the AI actor; human-AI co-agency, where the AI and human artist are equally responsible for the creative output of the artwork; and AI-centered enchanted agency, where human agency amounts to setting up an automated system in which the AI operates within specified parameters to create an artwork. Improvisational agency and assemblage distributed agency are more nebulous, indicating situations where creative authorship is less clear or spread between multiple actors. As the name suggests, improvisational agency refers to a live performance where the two actors, human and AI, feed off one another in a sort of creative feedback loop; in this instance it is unclear from where the creative agency stems. Assemblage distributed agency suggests co-agency distributed across several actors’ input. The example given in the article describes an artwork created using MRI brain scans that are then fed into a generative AI to create new works. In this situation there are several actors: artist Pierre Huyghe collaborating with neuroscientist Yukiyasu Kamitani, using brain scans of numerous participants, culminating in a remix of imagery courtesy of the generative AI.
The categorisation of these artworks creates clear parameters from which to judge the creative output of the artist and the AI respectively, allowing the reviews of these works to be properly analysed. An interesting pattern emerged from this analysis:
“When the outcomes are evaluated as creative or of good quality, the focus is often on the human creative agency, considering AI as a tool. When the outcomes are considered uncreative or of lesser quality, it is often AI that is considered as the central creative agency, which then becomes empty of intention, creativity and meaning” (Loivaranta, Hautala, and Lundman 2025).
This human-centric bias is hardly surprising, and I suspect that in several instances the presence of AI generated material legitimately compromised the emotional and creative integrity of the work. That said, I would argue that this clear bias in favour of human creativity stems from a general contempt towards AI generated content, and is at least somewhat removed from simple aesthetic considerations.
This article got me thinking: what is it about art that humans find so entrancing? And relatedly, when artworks are produced through a process in which human agency is reduced, why do humans balk at the resulting work? I believe there are several root causes for this kind of reactionary thinking. Firstly, generative AI is seen as a threat to creative industries and the livelihood of those working in them, so some degree of suspicion is to be expected. Secondly, the consumption and appreciation of art is wrapped up in romanticism and the idea that artworks are a divine manifestation channeled through a human vessel - the artist - which generative AI can’t replicate. And thirdly, AI generated content is viewed as copyright infringement - a shortcut or cheat that copies the artwork of others. Copyright infringement will no doubt dictate how generative AI technology is controlled and deployed by the corporations that fund its advancement. AI generated content also elicits a kind of technophobia: the machinery behind the generated content is so far beyond our comprehension that the resultant feeling is one of fear. In this paper I hope to address some of these concerns, exploring negative human reactions to AI generated content and looking at where the technology, and our collective opinion of it, currently stands.
THE S CURVE - WHERE WE ARE ON THE TIMELINE
We now (July 2025) exist in a world replete with AI generated imagery. While AI generated music may still be developing (at an alarming rate, I might add), AI generated imagery, both still images and video, has reached a level of fidelity that, even two years ago, didn’t seem possible (Knight 2023). This is particularly true of AI generated video which, at a glance, is real enough to be believed. In terms of visual fidelity, image generation seems to have reached the plateau of the S curve - the transition from exponential growth to diminishing returns - and we are witnessing that transition in real time. While Large Language Models are, in many ways, still in their infancy, in a few short years image generation has advanced from blurry fever dream to high definition perfection.
As the valve turns on AI imagery, the jet force cascade of generated content fills our world at a rate one might consider catastrophic, or at the very least portentous. It’s easy to be emotionally affected by these advancements, to experience concern at the ethical and societal implications of the technology as it accelerates down exponent highway with ever increasing haste. Despite my own misgivings, I do not view the technology itself as entirely calamitous. The ultimate iteration of generative AI could be marvelous; in many ways it feels like we have entered an exciting new age of technological growth. However, the potentially disastrous effects of its use on our society and culture are indeed cause for concern and ongoing scrutiny. While image generation may have seemingly plateaued, machine learning and large language model (LLM) technology as a whole are still nascent. As the technology grows, the implications of its use, and abuse, are worrying. Get ready for a world where AI character assassinations are rampant, and where an increasingly detached population finds that believing what you see is somehow even less reliable than it already is - AI generated spoofs have already influenced global politics (Myers and Thompson 2025).
Another worrying trend is the homogenisation of thought caused by LLMs like ChatGPT. A recent study outlined in The New Yorker (Chayka 2025) compared essays from three groups - the first writing unaided, the second using Google, and the third using ChatGPT - and found that the third group’s essays exhibited an averaging effect, and that their electroencephalography scans showed less brain activity. The ChatGPT group’s essays were found to feature common word usage, and despite the essay questions being designed to elicit a broad range of responses, the study found that the essays written with ChatGPT showed homogeneity in their topical content (Kosmyna et al. 2025). The results of this experiment are hardly surprising when you consider that large language models are designed to sift through datasets and present the user with the most likely response to the input prompt. LLMs are, in a word, averaging the data, reducing it down to the core information that the prompt requested.
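This averaging tendency can be caricatured in a few lines of Python. The snippet below is a deliberately crude sketch, not a real language model: it simply returns the single most frequent continuation in a toy dataset, illustrating how mode-seeking output collapses diverse source material into one middle-of-the-road answer.

```python
from collections import Counter

# A caricature of "most likely response" generation: diverse
# continuations exist in the data, but a model that always emits
# the mode gives every user the same homogenised answer.
corpus_continuations = [
    "happiness is family", "happiness is freedom",
    "happiness is family", "happiness is wealth",
    "happiness is family",
]

def most_likely(continuations):
    """Pick the single most frequent continuation - the mode."""
    counts = Counter(continuations)
    return counts.most_common(1)[0][0]

# Every query gets the modal answer, regardless of the variety
# present in the underlying data.
print(most_likely(corpus_continuations))  # -> "happiness is family"
```

Real LLMs sample from a probability distribution rather than taking a strict mode, but the pull toward high-probability phrasing is the same pressure the study describes.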
In addition to the homogeneity effect, text based LLMs like ChatGPT are designed to be conversational and to act as a digital servant to the user. This conversational tone, paired with the need to gain more users and naturalise the technology, has led to a generally servile demeanour. This is another trend that, I believe, is designed to push users further into the technology, to normalise its use, and to psychologically manipulate people through AI generated sycophancy. After all, everyone enjoys a bit of encouragement when they ask a question, and in the case of ChatGPT, you're not just encouraged, you are outright praised for your insightful inquiry, acumen notwithstanding.
The institution of art is threatened by generative AI, both because of the philosophical implications of its creation and because of concerns surrounding the destruction of creative industries. Additionally, LLMs are trained on copyrighted material, a fact that not only suggests that copyright infringement on such a large scale, funded by multi-trillion dollar corporations, can sail by unchallenged, but also contributes to an ever increasing devaluation of art in public perception. The devaluation of music, both monetarily and in the public consciousness, started with Napster in the late 90s, continues today with streaming platforms like Spotify, and is well into its stride (Grimes 2020, Mulugeta 2024, Wayne 2023). The power of music has not gone away, but I would argue that it has become significantly more disposable over the past twenty years.
Generative AI is the latest development in the ongoing quiet destruction of creative industries, while simultaneously offering humanity an impressive and powerful marvel of technology. For example, it's easy to imagine a future in which sufficiently advanced generative AI could allow for a Star Trek holodeck-like experience. The swift advancement of LLM technology also hints at the real possibility of artificial general intelligence (AGI), the term used to describe human-level (or beyond) machine intelligence. Truly exciting stuff. The road to these possible eventualities lies before us, unlit and unknowable, but pregnant with ill omen and revelatory enhancement alike. As with any technological advancement, there are those who embrace it and those who fear it. Historical examples of technophobia are numerous - photography being one, synthesizers and digital sequencing another - topics I discussed in a previous piece (Philip 2025). With this in mind I remain hopeful about the future of generative AI and LLM technology. How artists approach the coming wave, and how they implement AI technologies in their work, is of great interest to me, and the basis for my ongoing research. Art is as much a commercial product as it is a soulful expression, and it has always been, and will always be, exploited for commercial gain and cultural influence. Given the latent fear of industrial collapse, I’m curious to see how generative AI will be used, abused, exploited, and subverted, by vultures and artists alike.
DEFINING ART
Attempts at defining art - and what constitutes an artwork - through the categorisation of aesthetics, cultural context, institutional conferral, and familial or genetic attributes derived from antecedent works have littered philosophical discussion for centuries. In the introduction to Theories of Art Today, a collection of philosophical articles from various authors, editor Noël Carroll provides a succinct recent history of attempts at defining art, and of the critiques and subsequent modifications that the key figures in this field (Clive Bell, R.G. Collingwood, Ludwig Wittgenstein, Morris Weitz, William Kennick, Monroe Beardsley, Arthur Danto, George Dickie, Richard Wollheim, and Joseph Margolis) have posited over the course of the twentieth century. I won’t spend time delving into these theories and definitions here as it is a huge topic and one that I will explore in future. However, in his article Art, Practice, and Narrative, Carroll proposes that we think of art as a cultural practice (Carroll 1988). This is in contrast to Bell and Collingwood’s stage one essentialism, Weitz’s open concept approach, and Dickie’s institutional theory. Carroll’s cultural practice approach revolves around two key ideas. Firstly, art is historical in nature: it is derivative of past practices and changes with cultural evolution, redefining the practice whilst remaining recognisable as contiguous with antecedent practice. Secondly, it is narrative in nature, in that this history can be seen as running through a narrative. The throughline from one artwork to the next can be viewed as adhering to one of three narrative forms: repetition, amplification, and repudiation. Put simply, an artist may repeat a previous form or practice to create a new work; alternatively they may build upon a prior form by amplifying its defining characteristics; or they may reject a prior form and craft a contrary response.
These three forms of artistic evolution are the connective tissue that joins all art - past, present, and future - together, creating a historical narrative that affords us a rational framework from which to identify artworks whose forms vary so widely, and whose aesthetic or expressive links are so tenuous, that they may seem objectively distinct or opposed. Carroll’s approach to defining art - or perhaps more accurately, to defining how we can identify art (a subtle distinction, but an important one, I think) - is useful when considering what constitutes an art practice, and what art is. While it is not the final word on the topic, I appreciate Carroll’s particular slant.
Katherine Wojtkiewicz attempts a definition of art in her article, How Do You Solve a Problem like DALL-E 2? Wojtkiewicz states:
“In an attempt to sidestep the larger conversation regarding the essence of artworks, I will not appeal to any specific account’s definition. Instead, I will posit two necessary conditions that appear in, and so are compatible with, an array of existing definitions of art: first, it must be created with the intention of being experienced as an artwork, and second, and relatedly, the artwork exists because of intentional action by the artist.” (Wojtkiewicz 2023)
I really like this definition; it's loose but decisive, and I feel it gets to the heart of the discussion. Other definitions tend to focus on a qualitative appraisal of an object before it can take its place in the annals of art history, whereas Wojtkiewicz broadly claims that art is ultimately defined by its intent, not simply its quality. This is important because the perceived quality of an artwork is entirely subjective, leaning heavily on the context of its place in history and within the aesthetic trappings of the art world. A common refrain regarding what is or is not art is “that's not art, a child could have done it,” implying that children can’t make art - a claim I take umbrage with. It is for these reasons that I approve of Carroll’s proposal that art is a cultural practice, couching it in a functional, utilitarian context that relies on historical narrative, comparable to other human practices that shift and evolve with the needs and perspective of the current culture.
“... cultural practices need not be static. They require flexibility over time in order to persist through changing circumstances. They tolerate and indeed afford rational means to facilitate modification, development into new areas of interest, abandonment of previous interests, innovation and discovery. Practices sustain and abet change while remaining the same practice. Practices do this by a creative use of tradition, or to put the matter another way, practices contain the means, such as modes of reasoning and explanation, which provide for the rational transformation of the practice.” (Carroll 1988)
In the same article (How Do You Solve a Problem like DALL-E 2?), Wojtkiewicz discusses the public perception of AI generated art, suggesting that what audiences find so distasteful about it has little to do with the artwork itself, instead stemming from social factors such as the fear of one's livelihood being supplanted by generative AI (Wojtkiewicz 2023). I would argue that, in the current sphere of AI rhetoric and critique (AI generated content is increasingly referred to as “AI slop”), negative public perception of generative AI is primarily a result of fear: fear of the aforementioned loss of livelihood, of the loss of creative industries, perhaps even of the loss of creativity itself. The more apocalyptic of these arguments fall a little flat for me. While I agree that AI technology is causing a shakeup of various industries, and does project a somewhat gloomy future for the creative arts, I don’t believe it will ever be the death of art, or even of specific art mediums.
EMPATHY
A core tenet of art’s usefulness is how it makes us feel; by allowing us to stand in the shoes of the artist (or perhaps the subject), art is the exploration of alternative realities. A romantic notion perhaps, but I would argue an accurate one. Art need not elicit a favourable response to enrapture an audience; its existence is nonetheless a challenge to our perception of reality, forcing us to take some kind of stance that is emotionally motivated. I have long held the opinion that every artwork is a bespoke world, aesthetically defined by the artist, inviting the audience to inhabit it for a time. These created worlds are a testament to our imagination: a defining characteristic of humanity with strong links to empathy. In a lecture at Edinburgh College of Art, Brian Eno states:
“Imagination is obviously the central currency of culture, imagination is how we bring new things into being, imagination is also how we learn to empathise with each other … All of our cooperation, really, is based on the act of imagination called empathy and, the act of imagination is something you have to practice. Of course you were born with it, but it doesn’t just stay still, you have to practice and one of the reasons we like art I think, is because it gives us the chance to practice imagining over and over and over again, to keep those circuits running.” (Eno 2017)
What Eno touches on here is that making art and experiencing art are acts of empathy; a reach into culture to draw out empathic abstractions on ideas we want others to consider. Empathy allows us a sort of telepathic inference that assists communication and cooperation. Carroll describes a similar process:
“Of special note here are the roles of makers and receivers. In many respects, the activities or practices of these two groups diverge. And yet, at the same time, they must be linked. For art is a public practice and in order for it to succeed publicly - i.e., in order for the viewer to understand a given artwork - the artist and the audience must share a basic framework of communication: a knowledge of shared conventions, strategies, and of ways of legitimately expanding upon existing modes of making and responding.” (Carroll 1988)
The communication between artist and audience is a sort of cooperation - a facet of empathy. To this end, art appreciation is a byproduct of empathy, and it stands to reason that human made art is important to other humans because of empathy. Artworks allow us to create and explore different worlds - different ways of being. When we experience a piece of art we place ourselves within it, imagining the artist's intent, empathising with the artist through their work. In stark contrast, large language models do not feel; any feelings attributed to their creations are a heterogeneous amalgam of various artists' emotional worlds formed into a new work - a remix of sorts. Once a human knows that what they are empathising with was created by a machine, it can feel hollow, like we’ve been tricked. It is of considerable importance to human minds that the artwork we consume be created by other human minds - by someone, not something - so that we can exercise our imagination and empathise with their perspective, or at least have the option to.
ROMANTICISM
Romanticism in art discussion is as prevalent as water in the ocean: it is inescapable. I have no intention of diminishing romanticism in art appreciation, but as an artist I find the notion that artworks emerge fully formed from a genius mind, a divine gift from the heavens, to be a fallacy. Art appreciation is full of romanticism because we experience art fully formed, a perfected piece that we consume. We know very little about its origins unless it is being exhibited in a gallery, where we are encouraged to digest the cliff notes on how to appreciate a given piece. Generally speaking, we consume art by tidally crashing into it; when fully immersed we find ourselves swept away by currents of feeling. This is particularly true of music, which is fairly unique among art mediums in that it seemingly has a direct line to our emotional core.
I believe art creation is fundamentally different from art consumption. While both require empathy, art creation relies on accumulated knowledge and technical know-how; artists tend to approach their work systematically because they have learned that certain practices produce better results. A budding artist may begin their journey through a love of art and romantic notions regarding its creation - after all that is one of the great draws of art - but these romantic forays tend to, eventually, give way to a carefully considered art practice. Clinical as this may sound, what is important to acknowledge here is that a clearly defined art practice - a structure and ritual for art creation - fosters the artist's ability to create quality art with greater consistency. Just as Carroll posits, art practice is akin to a cultural practice in that it is “aimed at achieving goods that are appropriate to the forms of activity that comprise them, and these reasons and goods, in part, situate the place of the practice in the life of the culture” (Carroll 1988).
A personal and emotional connection to the work is still essential for the artist, but I would argue that the primary emotional byproduct of art creation, for the artist, is the satisfaction of creation itself. It's the journey, not the destination. By contrast, we are drawn to consume art because it improves our lives in myriad ways. Art allows us to explore different ideas in a consequence free environment, affording us mental models of differing perspectives, divergent and/or extreme emotional states, and visions of other possible realities, and letting us relive our own experiences with art as the proxy.
Because the process of art creation is often hidden from public scrutiny, creating (painting, sculpting, composing, writing, etc.) is often viewed as magical, fantastical, reserved only for those with a genius mind. As previously stated, artworks can be powerful and emotionally resonant, a phenomenon that leads those consuming them to ascribe sentimental gravitas to the work. But there is something happening here beyond art fanaticism as a byproduct of its emotional influence: when the art process is hidden, our imagination tends to fill in the gaps with fanciful mystique. In the world of sound and music, this obfuscation of the art process is related to the concept of acousmatic sound: “sound one hears without seeing their originating cause - an invisible sound source” (Chion 2025). The word acousmatic is derived from the Greek akousmatikoi, referring to the probationary pupils of Pythagoras, who were made to sit in silence while he lectured them from behind a veil - often referred to as the Pythagorean Veil. Electronic music is rife with acousmatic sound as it is often created using esoteric electronic instruments that lack traditional input methods such as the piano keyboard. It is also made with computers and software that are not generally used or understood by non-musicians. While not intentional, the process of creation is hidden and the sounds used are not easily identified (certainly not to the same degree as a piano or a guitar). People outside of music production might understand that electronic music is made with synthesizers, but they are unlikely to be aware of any detail beyond that noun, which encompasses a wide variety of instruments capable of a diverse array of timbres.
Electronic music artists Autechre (Rob Brown and Sean Booth) are a quintessential example of acousmatic sound. Their music is characterised by futuristic, glitch laden, generative beats and atmospheric, alien textures. Autechre's oeuvre is sonically and functionally distinct from common forms of electronic music production; even within the genre in which they’re categorised (IDM: intelligent dance music) their music is often challenging, and almost willfully abstruse. Autechre’s penchant for abstraction has conjured intrigue and a desire among fans to discover their production secrets, which are rarely volunteered. Booth and Brown don’t seem concerned with actively keeping secrets, as a 2004 interview with Sound On Sound attests (Tingen 2004), but they aren’t particularly forthcoming either. As if to layer on even more mystique, Autechre’s live shows are always performed in total darkness, without any stage lighting at all. When questioned about their proclivity for dark performances, Sean Booth said: "I think something happens when you listen to music either with your eyes closed or in the dark. The music reaches further into you … We're just really into sound. It always seemed such a basic and obvious thing to do" (Sweeney 2018).
It's interesting to me that while the obfuscation of process can surround an artist with mystique, our inability to comprehend the complex process by which generative AI creates has a tendency to unsettle us. Of course, there are more reasons for this than the obscurity of the process. We are social creatures, and as such are compelled to connect with others and communicate through the full range of expression, including art. A machine holds no such compulsion for us; we are drawn to a machine for what it can do for us, how it can help us perform tasks. We might personify machines, and we might find them so important to our existence that we ascribe emotional complexity to them, even if only in regards to how they affect us and our ability to execute on our desires. Animals are another interesting example here, because we have used animals as tools throughout history, tasked with all manner of jobs to assist us in our lives. But animals (especially mammals) clearly experience emotion, allowing us to empathise with them - something that is (currently) impossible with a machine.
COLLECTIVE HALLUCINATION & THE UNKNOWABLE
AI generated imagery highlights an internal conflict that I experience almost every time I see a generated image, something that I expect will ring true for others also. As image quality improved I found myself increasingly drawn to it, fascinated by the power of the technology behind it; it felt like a form of magic, almost like a window into a world we aren’t supposed to see. This may seem like a slightly odd comparison, but the closest feeling I can relate this to is that of hallucinatory visualisations brought on by psychedelic substances such as psilocybin, LSD, and DMT. While those visual experiences come coupled with a bodily, emotional, and existential experience - something which cannot be discounted - the visual aspects alone can often feel like one is looking beyond reality into some kind of universal source code. These experiences can be very powerful because what we are seeing and experiencing hints at some other plane of existence, a layer of reality that we are allowed to see only under these circumstances. What I find fascinating about AI generated imagery, and why I feel it compares to psychedelic experiences (at least their visual component), is that every AI image feels to me like a technologically powered, digital hallucination, derived from the collective consciousness of human experience and expression.
I’m not alone in this feeling. Artist Refik Anadol creates visual artworks derived from image datasets fed into a neural network. He describes one of his artworks - which displays ever evolving images of New York, its skyline and nature, derived from a dataset of thousands of images - as a collective memory of the city. Anadol states:
“Machine looks at this information like a human being, but it's kind of more like collective memories than personal memories, because a building in New York can be explored by thousands of perspectives, from multiple angles, from a different time of the year. It's more like an honest memory for a machine 'cause it feels more totalitarian, and feels everything and everyone than just one person … When a machine learns from outputs and memories like this, it can create an alternative reality. It look at the patterns of the trees, the buildings, the nature, the people, every single thing hidden inside these image corpus. Seeing a machine giving a context of data and giving an hallucinative output was something really inspiring.” (Nast 2020)
Continuing with the fantastical, horror themes and gory imagery are rampant in the world of AI image generation. While there will always exist a general fascination with horror in art and media, there is a certain quality to AI imagery that plays into that theme. One famous example is the persistent image of a woman dubbed Loab by Steph Maj Swanson (AKA Supercomposite on X/Twitter), who discovered this disturbing figure using negative prompt weights, “in which a user tries to get the AI system to generate the opposite of whatever they type into the prompt” (Rose 2022). What's so fascinating about this story in particular is that it perfectly plays into a latent fear of the incomprehensible process by which the AI model generates content - not unlike the phenomenon of acousmatic sound and the associated reverie that tends to follow. It is akin to a ghost in the machine, obscured by complex technology far beyond the capabilities of a single human mind. The method by which this Loab character repeatedly appeared in generated imagery makes it irresistible material for a new urban legend. Even without the persistence of Loab, there is an unsettling surrealism to a lot of AI generated content. The ghoulish je ne sais quoi that so often permeates AI images is notably distinct from the intent of the prompt, something that is clearly demonstrated by negative prompt weights. When paired with horror themes, the ominous is accentuated; likewise, with AI generated imagery that is intentionally weird in nature (of which there is a lot), the eeriness of the medium bolsters the themes of the prompt.
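For the curious, the mechanics behind negative prompt weights can be sketched with simple arithmetic. In the classifier-free guidance scheme common to diffusion image generators, each denoising step blends a prompt-free prediction with a prompt-conditioned one; the toy numpy sketch below is an illustration of that general formula, not the code of any particular system, and the vectors stand in for what would really be image-sized tensors.

```python
import numpy as np

# Toy classifier-free guidance step. Diffusion image generators
# typically combine two denoising predictions at every step:
#   pred = uncond + w * (cond - uncond)
# where w is the guidance weight. With w > 0 the output is pushed
# toward the prompt; with w < 0 it is pushed away from it - the
# "opposite of whatever you type" behaviour described above.

def guided_prediction(uncond, cond, weight):
    """Blend the unconditional and prompt-conditioned predictions."""
    return uncond + weight * (cond - uncond)

uncond = np.array([0.0, 0.0])  # the model's prompt-free prediction
cond = np.array([1.0, 0.5])    # the prediction conditioned on the prompt

toward = guided_prediction(uncond, cond, 7.5)   # ordinary guidance
away = guided_prediction(uncond, cond, -1.0)    # negative prompt weight

print(toward, away)
```

Negatively weighting a prompt, as in the Loab experiments, leaves the model wandering in whatever strange region of its learned distribution lies "away from" ordinary descriptions - which may be part of why the results feel so uncanny.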
When an experienced prompter (prompt artist? prompt engineer?) uses the technology, equipped with the knowledge and skillset to execute on their prompt, the resulting images can be beautiful, powerful, horrifying, and above all provocative. When I think about the imagery that really hits home for me - the AI generations that feel like a magical window into an eerie, unknowable layer of reality - it tends to be imagery that is wholly referential to existing artworks by human artists: artists whose imaginations cooked up a vision of reality that takes me to that place of the unknowable, the forbidden, the horrific, the beautiful, the intriguing (artists such as H.R. Giger or Zdzisław Beksiński). Granted, not every AI image is a divine manifestation; many represent the workshop sweepings of the water cooled, power guzzling GPU server farm floor where these calculations originate (Zewe 2025). On that note, there is a perverse fascination with the notion that we are increasing the rate of our planet's demise to see what Star Wars would look like as a spaghetti western directed by Sergio Leone, or what the cartoon Simpson family would look like if they were real. Given that generative AI is only as good as the material it is trained on, what I am really fawning over when I view an AI image is the heterogeneous imaginations of countless artists.
And therein lies the internal conflict I referenced earlier: AI generations are an eternal repository of artistic remix. The sorts of aesthetics that I enjoy can be perfectly replicated with new variations ad infinitum. There is an allure to this; something in my brain wants to see more examples of aesthetically pleasing or disturbing imagery, and even knowing that it is all derivative, there is an addictive quality to it. Of course, the same could be said for human made art: if it is to my taste then I want to see, feel, and hear more of it. But human artistic output is finite, whereas generative AI can provide an endless supply of enticing imagery that one could spend an eternity perusing. And the uncanny, otherworldly allure gives the creations an extra layer of mystique that feels like a form of magical realism. It’s vaguely seductive, and I find that just a little bit worrying.
LICENSING TURMOIL - THE DEAD SPEAK
As the models have advanced, this unsettling, perhaps uncanny-valley effect has diminished, especially as the models incorporate a wider range of input. A recent example was the addition of Studio Ghibli-style animation to OpenAI’s ChatGPT image generator, which instigated a substantial influx of users to the platform and a deluge of Ghibli-style images pasted all over the internet (High 2025). This development also sparked some amount of outrage - or perhaps just some commonplace internet schadenfreude - with a 2016 video of animator/filmmaker Hayao Miyazaki resurfacing in which he refers to an AI-assisted animation system as “an insult to life itself” (Manhattan Project for a Nuclear-Free World 2016).
Another strange wrinkle in generative AI’s development is the inclusion of an AI-voiced Darth Vader in the video game Fortnite. In 2022, actor James Earl Jones, who voiced the Darth Vader character, signed over the rights to use his voice to Lucasfilm (owned by Disney) and allowed a Ukrainian AI company called Respeecher to train its AI on his voice so that Lucasfilm could continue using it for the character going forward (Lammers 2024). The AI version of Jones’ Darth Vader voice was used in the Obi-Wan Kenobi TV series and has now made an appearance in Epic Games’ Fortnite, in which players can talk to Vader and get generated responses from the character. This seemingly benign appearance of Vader in Fortnite has led to the character saying all manner of things, including racial slurs and swearing. Naturally this garnered a lot of attention in the brief window of time that Vader appeared in the game (it was a limited-time Star Wars-themed season of the video game), culminating in SAG-AFTRA filing “an unfair labor practice charge with the National Labor Relations Board against Epic subsidiary Llama Productions for implementing an AI-generated Darth Vader voice in Fortnite … without first notifying or bargaining with the union, as their contract requires” (Edwards 2025). Following on from this increasingly absurd scenario, Disney (the owner of Lucasfilm and Marvel) and Universal have sued Midjourney, one of the premier AI startups, for copyright infringement (Barnes 2025). While not directly connected to Darth Vader in Fortnite, there is a certain poetry to this series of events, almost like the ouroboros - the snake eating its own tail.
SOCIETAL IMPACT
As I have previously touched on, one of the biggest concerns surrounding the ever-growing presence of generative AI in our society and culture is the fear that it will destroy industries and jobs. Related to this are concerns over copyright infringement and the theft of original works by human artists. Within the context of the society in which we exist, these fears are completely reasonable. What I find interesting is that this kind of critique stems from fear born of generative AI’s implied threat to individual livelihoods, and yet I can't help but think that the Disney/Universal lawsuit against Midjourney is where the real battle lines will be drawn. The encroachment of generative AI will likely affect the individual in much the same way corporate behemoths tread on individuals as a matter of course; I see it as inevitable. This lawsuit, however, is indicative of how the issue of LLMs being trained on copyrighted material will continue to play out over the next several years. Individuals may suffer through the AI age as industries evaporate and the need for skilled creative labour diminishes, but it is corporate power, legal disputes, and above all money that will dictate the course of generative AI.
I am in the midst of reading Ray Kurzweil’s dense 2005 book, The Singularity Is Near. Reading it has had an unexpectedly positive effect on my general outlook for the future of humanity. While human history is rife with horrors, our civilisation need not always be. Kurzweil’s future is reminiscent of what one might surmise of the origins of Iain M. Banks’s fictional Culture civilisation. For those unfamiliar, the Culture, as depicted in Banks’s science fiction novels, is an advanced pan-human civilisation guided by impossibly advanced AI Minds. The Culture series ostensibly portrays a technologically advanced, socialist utopia. While the Culture may be a distant prospect for humanity, the societal structure therein is rather similar to the singularity described by Kurzweil. Kurzweil claims that human evolution has moved beyond the biological and is now progressing on a technological scale. He posits that our technological advancement is moving through stages he calls epochs: physics and chemistry, biology and DNA, brains, and technology. Each of these areas of human development is swiftly approaching the vertical end of an exponential growth curve. Kurzweil outlines the final epochs as a merger of the human and the technological, assisted by artificial intelligence (Kurzweil 2005).
Far from the apocalyptic depictions of artificial intelligence in cinema, such as The Terminator (Skynet) or The Matrix, Kurzweil predicts that AI will take on the role of benevolent custodian in human society. As AI advances beyond human intelligence, its role will come to resemble that of the Minds depicted in Banks’s Culture series of novels. Human intelligence will also move beyond the grey matter of our brains, with technological upgrades to our biological systems and, ultimately, comprehensive transference of one's being to the technological. Of course, this is all speculative; however, it is informed speculation based on the trajectory of human technological evolution. Rather than signalling the end of culture, or the end of art, perhaps the AI revolution will finally bring with it the oft-promised automation of our society, leaving humans to do little more than amuse themselves.
HOPE AND TEMPERANCE
Given the current state of the world - the way those with power cling to it, while those without are routinely and progressively stripped of liberties - it is difficult to imagine a utopian outcome as long as humans are at the wheel. Perhaps the birth of true artificial general intelligence (AGI) will bring forth the change we, as a species, so desperately need. I personally predict a near future in which current and progressively more advanced versions of today's generative AI tools will seep into every groove of our society. As it stands, the technology is functionally little more than an amusement - albeit a powerful one - and we are just now reaching the precipice of rampant AI-assisted misinformation… perhaps we are already there. The large corporations funding this AI revolution are trying desperately to find a commercial foothold and relevance for the technology in its current form (Nathan, Grimberg, and Rhodes 2024), but make no mistake: the final form of this tech is still ahead of us, and that is where the monetary investment is targeted. As the unlit road snakes toward a potentially ominous future, I want to believe - must believe - that there is hope for humanity in the journey ahead.
Bibliography
Barnes, Brooks. 2025. “Disney and Universal Sue A.I. Firm Midjourney for Copyright Infringement.” The New York Times, June 11, 2025. https://www.nytimes.com/2025/06/11/business/media/disney-universal-midjourney-ai.html.
Carroll, Noël. 1988. “Aesthetics and the Histories of the Arts.” The Monist 71 (2): 140–56.
Carroll, Noël. 2000. Theories of Art Today. Madison, WI: University of Wisconsin Press.
Chayka, Kyle. 2025. “A.I. Is Homogenizing Our Thoughts.” The New Yorker. June 25, 2025. https://www.newyorker.com/culture/infinite-scroll/ai-is-homogenizing-our-thoughts.
Chion, Michel. 2025. “Acousmatic.” Filmsound.org. 2025. https://www.filmsound.org/chion/acous.htm.
Sweeney, Eamon. 2018. “Autechre: ‘Something Happens When You Listen to Music in the Dark.’” The Irish Times, July 14, 2018. https://www.irishtimes.com/culture/music/autechre-something-happens-when-you-listen-to-music-in-the-dark-1.3558048.
Edwards, Benj. 2025. “Labor Dispute Erupts over AI-Voiced Darth Vader in Fortnite.” Ars Technica. May 19, 2025. https://arstechnica.com/ai/2025/05/fortnites-ai-darth-vader-spawns-unfair-labor-practice-charge-from-voice-union/.
Eno, Brian. 2017. “Andrew Carnegie Lecture Series – Brian Eno.” YouTube.
Grimes, Taylor. 2020. “How Spotify Made Music Disposable.” Swim into the Sound. September 2, 2020. https://swimintothesound.com/blog/2020/9/2/how-spotify-made-music-disposable.
High, Matt. 2025. “How OpenAI’s New Image Model Sparked the Studio Ghibli Trend.” Aimagazine.com. Bizclik Media Ltd. March 31, 2025. https://aimagazine.com/articles/how-openais-new-image-model-sparked-the-studio-ghibli-trend.
Kane, Brian. 2014. “Myth and the Origin of the Pythagorean Veil.” Oxford University Press EBooks, June, 45–72. https://doi.org/10.1093/acprof:oso/9780199347841.003.0003.
Knight, Will. 2023. “Where the AI Art Boom Came From—and Where It’s Going.” WIRED. January 12, 2023. https://www.wired.com/gallery/where-the-ai-art-boom-came-from-and-where-its-going/.
Kosmyna, Nataliya, Eugene Hauptmann, Ye Tong Yuan, Jessica Situ, Xian-Hao Liao, Ashly Vivian Beresnitzky, Iris Braunstein, and Pattie Maes. 2025. “Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task.” ArXiv.org. June 10, 2025. https://arxiv.org/abs/2506.08872.
Kurzweil, Ray. 2005. The Singularity Is Near: When Humans Transcend Biology. New York: Penguin Books.
Lammers, Tim. 2024. “James Earl Jones Signed over Rights for AI to Recreate Darth Vader’s Voice.” Forbes, September 9, 2024. https://www.forbes.com/sites/timlammers/2024/09/09/james-earl-jones-signed-over-rights-for-ai-to-recreate-darth-vaders-voice/.
Loivaranta, Tikli, Johanna Hautala, and Riina Lundman. 2025. “Spectrum of Creative Agencies in AI-Based Art: Analysis of Art Reviews.” Digital Creativity, April, 1–15. https://doi.org/10.1080/14626268.2025.2491471.
Manhattan Project for a Nuclear-Free World. 2016. “Hayao Miyazaki’s Thoughts on an Artificial Intelligence.” YouTube. November 16, 2016.
Mulugeta, Melody. 2024. “Streaming Services Have Devalued Our Favorite Artists with Unethical Pay Rates.” Los Angeles Loyolan. March 21, 2024. https://www.laloyolan.com/opinion/streaming-services-have-devalued-our-favorite-artists-with-unethical-pay-rates/article_e27641d6-e7f2-11ee-bb7c-ef9ea10b4de2.html.
Myers, Steven Lee, and Stuart A Thompson. 2025. “A.I. Is Starting to Wear down Democracy.” The New York Times, June 26, 2025. https://www.nytimes.com/2025/06/26/technology/ai-elections-democracy.html.
Nast, Condé. 2020. “How This Artist Uses A.I. & Data to Teach Us about the World.” Wired. January 17, 2020. https://www.wired.com/video/watch/obsessed-how-this-guy-uses-machine-learning-to-create-installations.
Nathan, Allison, Jenny Grimberg, and Ashley Rhodes. 2024. “Gen AI: Too Much Spend, Too Little Benefit?” Goldmansachs.com. June 27, 2024. https://www.goldmansachs.com/insights/top-of-mind/gen-ai-too-much-spend-too-little-benefit.
Philip, Miles. 2025. “Can Generative AI Be Art?” Substack.com. Miles Cosmo Philip. April 13, 2025. https://milescosmo.substack.com/p/can-generative-ai-be-art.
Rose, Janus. 2022. “Why Does This Horrifying Woman Keep Appearing in AI-Generated Images?” VICE. September 7, 2022. https://www.vice.com/en/article/why-does-this-horrifying-woman-keep-appearing-in-ai-generated-images/.
Tingen, Paul. 2004. “Autechre.” Sound on Sound, April 2004. https://www.soundonsound.com/people/autechre.
Wayne, Trevor. 2023. “The Devaluation of Artists’ Works: Unfair Payment by Streaming Platforms in the Music Industry….” Medium. August 6, 2023. https://medium.com/@TrevorW49/the-devaluation-of-artists-works-unfair-payment-by-streaming-platforms-in-the-music-industry-b9c8586170cc.
Wojtkiewicz, Kathryn. 2023. “How Do You Solve a Problem like DALL-E 2?” The Journal of Aesthetics and Art Criticism 81 (4): 454–67. https://doi.org/10.1093/jaac/kpad046.
Zewe, Adam. 2025. “Explained: Generative AI’s Environmental Impact.” MIT News. Massachusetts Institute of Technology. January 17, 2025. https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117.

