Nothing New Under the Algorithm

The contemporary art world seems caught in a recurring cycle of excitement and apprehension about generative AI. As media scholar Mark Mahoney has noted in the Journal of Media Ecology, this focus on novelty – the “hype” – often obscures history, producing the illusion of a sudden and unprecedented rupture. Yet that illusion hides a more persistent truth: art has always been a collaboration between human intention and non-human systems.

AI, I would argue, is not an exception to this history but an escalation of it. What changes is not the nature of collaboration but its scale, speed, and abstraction. The partnership between human and machine continues a long lineage of shared making, though the balance of agency and the conditions of production have shifted. The unease surrounding this change arises less from the act of collaboration itself and more from the conceptual burden of the word intelligence – and from the economic and ethical dislocations that accompany it.

I.

From its earliest moments, art-making has been a negotiation with the material world. The first painters did not apply pigment to a neutral ground but responded to the contours of the cave wall, where a crack might suggest the outline of a bison or the curve of a back. Materials and tools shaped the act as much as the artist’s will.

Sculpture makes this interdependence more tangible. Renaissance sculptor Michelangelo’s claim to “free” a figure already within the marble expresses an awareness of the stone’s agency – its veins, texture, and resistance – which guide as much as they constrain. Such responsiveness is not submission but recognition of shared authorship, a kind of dialogue across species and substances.

Even before the industrial age, artists turned to technological systems to extend their senses. The camera obscura, for example, channelled light to create projected images, making visible a world the eye alone could not stabilise. Dutch painter Johannes Vermeer is thought to have used such devices to achieve new levels of realism. These instruments did not replace perception but reconfigured it. In each case, the finished work arose from an interplay between human decision and non-human logic. These precedents show that human and non-human agents have always co-shaped art, and they prefigure the challenges that generative systems now pose.

II.

As technology evolved, so too did our collaborative partners. The invention of the camera in the nineteenth century provoked familiar anxieties: critics declared the end of painting, fearing that mechanical precision would replace the artist’s eye. Yet the new medium became an enduring partner, a device that shared agency with the photographer through its optics, exposure, and timing.

The twentieth century offers further examples of reappropriation and creative misuse. The record player, originally designed for the passive reproduction of sound, was transformed into an instrument for producing new music from existing records. DJs manipulated vinyl, altered speed, and combined fragments from different songs, turning listening into composition. This act of intervention repurposed a consumer technology into a generator of creativity, demonstrating how tools can extend artistic agency beyond their intended function.

The DJ’s practice remained grounded in embodied skill and social context – reading audiences, timing transitions, performing in real time. The turntable held no encoded knowledge; all musical intelligence resided in the performer. Generative AI systems, in contrast, encode patterns extracted from millions of creators. The artist working with AI guides this pre-existing knowledge rather than exercising direct control over raw material. This shift relocates creative intelligence within the collaborative process – from the performer alone to a distribution shared between artist and system – and invites reflection on the nature of agency itself.

Despite this difference, generative systems can be understood as remix engines rather than autonomous creators. They generate images, texts, or sounds by reorganising material already made by humans. Like the turntable, these systems extend the possibilities of reuse and recombination, showing that originality can arise from intervention within an archive rather than from creating something entirely new.

Experiments in distributed authorship followed similar trajectories. Bauhaus artist László Moholy-Nagy’s telephone paintings demonstrated that the artist’s idea could travel through layers of mediation, each shaped by different forms of skill. Though his specifications maintained control, the work nonetheless explored the separation of conception from execution, raising questions about where authorship truly resides.

Similarly, American composer John Cage’s prepared piano transformed the instrument into an unpredictable system. By placing bolts, screws, and pieces of rubber between its strings, he invited material chance into composition. The objects were not conscious collaborators, yet they exerted agency through their physical properties. Cage’s method revealed that creativity often lies in guiding rather than controlling a process.

If these earlier experiments decentralised the artist’s hand, AI extends that decentralisation to what might be called the cognitive level – though perhaps this was always partially true. Even the photographer selecting a decisive moment or the DJ reading a crowd exercises a form of curation over possibilities the medium generates. What shifts with AI is the scale and opacity of this relationship. Where the camera, record player, and piano remained bounded by physical limits, AI operates within informational constraints that appear less tangible, recombining patterns across data spaces whose extent can be difficult to grasp. The artist’s role shifts – or rather, becomes more explicitly what it may always have been – from sole maker to director of systems, from commander to curator of emerging possibilities.

III.

The technological lineage of AI begins with an effort to model the brain itself. Early artificial neural networks were inspired by the architecture of neurons, seeking to emulate perception, pattern recognition, and learning. These systems were experimental and limited in capability, constrained by computing power and incomplete understanding of biological cognition. They were tools for exploration rather than autonomous collaborators.

Over decades, these models evolved. Techniques such as backpropagation and deep learning allowed networks to process increasingly complex inputs, gradually approximating higher-level functions. Yet even the most sophisticated neural networks remain simplified abstractions of biological brains. They mimic certain organisational principles – layered processing, associative learning, pattern recognition – without reproducing consciousness, intention, or subjective experience. In this sense, AI has never been a mind in the human sense; it has always been a set of procedural systems capable of simulation rather than understanding.
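
To make the modesty of these abstractions concrete, consider a sketch. The Python below is purely illustrative – a toy network, not a reconstruction of any historical system – but it contains the whole paradigm in miniature: layered arithmetic, adjusted by backpropagated error, learning the XOR pattern that famously defeats a single layer.

```python
# A toy neural network trained by backpropagation - an illustrative
# sketch, not a reconstruction of any historical model.
import numpy as np

rng = np.random.default_rng(1)

# Four input pairs and their XOR labels: the classic pattern that
# a single layer cannot learn but a hidden layer can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for a 2 -> 8 -> 1 network: a few dozen numbers in all.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: layered processing, weighted sums squashed by a nonlinearity.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (backpropagation): push the prediction error back
    # through each layer to measure every parameter's contribution to it.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)

    # Gradient descent: nudge each parameter against its error gradient.
    W2 -= h.T @ grad_out
    b2 -= grad_out.sum(axis=0)
    W1 -= X.T @ grad_h
    b1 -= grad_h.sum(axis=0)

print(out.round(2))  # converges toward [[0], [1], [1], [0]]
```

Everything this system “knows” lives in some thirty numbers nudged by repetition. Scale those numbers into the billions and the behaviour grows astonishing, but the character of the process – simulation without experience – does not change.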

Generative AI represents the latest stage in this trajectory. These systems do not merely recognise patterns; they recombine, extrapolate, and produce outputs across modalities – text, image, sound – based on vast datasets created by humans. The generative turn moves the model beyond mimicking cognition toward acting as a creative engine, capable of producing forms that its designers could not anticipate in detail. It embodies the same principle we see in the turntable or the prepared piano: procedural systems generate novelty, guided and curated by human intention. Unlike early neural networks, generative AI operates at scale, drawing upon the accumulated knowledge of millions of human creators, yet it remains a tool for augmentation, not a conscious collaborator.

This historical arc – from neural mimicry to generative systems – highlights a recurring pattern. AI does not break with past collaborations but intensifies them. Where the camera, the prepared piano, and the turntable remixed material within physical and social constraints, generative AI remixes knowledge within informational and statistical constraints. It accelerates the process, expanding the scope of recombination while retaining the need for human judgement, taste, and ethical consideration.

IV.

The word intelligence carries philosophical weight. To attribute it to machines is to accept a narrow definition of thought as computation – a process of prediction and optimisation. Such a view omits what distinguishes human cognition: consciousness, intention, and what philosophers of embodied mind call the Umwelt – the lived, perceptual world of an organism. An AI can describe rain, but it cannot experience rain.

Ancient Greek philosopher Aristotle’s distinction between techne (skilled craft), poiesis (creative bringing-forth), and phronesis (practical wisdom) remains useful here. AI can perform techne – skilled, procedural work – and simulate poiesis, the act of bringing something into being. Yet it lacks phronesis, practical wisdom informed by experience, context, and ethical judgement. This form of knowledge cannot be computed; it arises from lived encounter and social relation. AI’s outputs may resemble acts of creation, but they are detached from the horizon of purpose in which human creativity is situated.

V.

Viewing AI as part of a historical trajectory does not diminish the magnitude of change. What distinguishes this moment is the altered nature of constraint. Earlier collaborations were bound by physics – the grain of marble, the viscosity of paint, the fall of light, the friction of vinyl. AI’s limits are informational, defined by data, computation, and architecture. Whilst these constraints differ from physical ones, they remain constraints nonetheless – bounded by energy costs, hardware capacity, and the choices of those who design and deploy them.

AI systems are technological artefacts with histories. They carry the imprints of their authors, the biases of their training data, and the economic structures that fund their development. They are not exceptions to the material conditions of making but expressions of them. What has changed is the scale at which recombination occurs and the degree of abstraction between input and output.

This abstraction can be misleading. AI infrastructure remains physical, tied to the material world in ways less obvious but no less powerful than earlier technologies. Data centres consume substantial electricity and water for cooling, often drawing on non-renewable resources. Rare earth minerals for processors and memory are extracted under conditions that frequently involve ecological and social harm. These systems occupy space and depend on energy, grounding AI in material reality even as it appears intangible.

The ethical and economic consequences are substantial. Generative systems are trained on vast collections of human work, often gathered without consent or attribution. AI thus does not merely collaborate but extracts, transforming cultural memory into proprietary data infrastructure. Unlike earlier forms of preservation – oral traditions, libraries, museums – these datasets are enclosed, governed by corporate interests rather than cultural stewardship.

Economic concentration amplifies these concerns. Whereas tools such as cameras, pianos, or turntables could be owned and maintained by individual artists, access to advanced AI increasingly requires subscription to proprietary platforms controlled by corporations with interests far removed from artistic practice. Market forces are embedded in the creative process in ways that reshape relationships between artist and tool.

Contemporary artists are responding to these conditions in varied ways. Turkish-American artist Refik Anadol creates immersive data sculptures that foreground the material foundations of AI systems, making visible the computational processes usually hidden from view. German filmmaker and artist Hito Steyerl interrogates algorithmic vision itself, turning AI’s classificatory logic into a subject of critical examination. Others use generative tools pragmatically within commercial contexts, negotiating the tension between creative autonomy and economic necessity. These practices suggest that engagement with AI need not be either wholesale adoption or outright rejection, but can involve strategic appropriation, critique, and resistance.

A further risk lies in homogenisation. Material media possess distinctive irregularities; each stone, pigment, or record offers its own resistance. AI, in its default operation, tends toward statistical convergence. Techniques such as raising a model’s sampling temperature can push outputs away from the mean, but the system’s fundamental inclination remains toward generality. The challenge for the artist is deviation – locating singularity within a field optimised for the typical. AI multiplies possibilities while narrowing difference.
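
That inclination toward the mean is easy to make concrete. A generative language model draws each next word from a probability distribution, and a single parameter – the sampling temperature – governs how far a draw may stray from the most probable choice. The Python sketch below uses an invented five-word vocabulary and invented scores, but the mechanism is the one real systems use.

```python
# Temperature sampling over a toy next-word distribution - the vocabulary
# and scores here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

tokens = ["the", "a", "bison", "umwelt", "turntable"]  # hypothetical vocabulary
logits = np.array([4.0, 3.5, 1.0, 0.2, 0.1])           # model scores: common words dominate

def sample_counts(logits, temperature, n=1000):
    # Softmax with temperature: low values sharpen the distribution toward
    # the most probable token; high values flatten it, admitting rarer ones.
    scaled = logits / temperature
    p = np.exp(scaled - scaled.max())
    p /= p.sum()
    draws = rng.choice(len(logits), size=n, p=p)
    return {tokens[i]: int((draws == i).sum()) for i in range(len(tokens))}

print(sample_counts(logits, temperature=0.3))  # draws cluster on the most probable words: convergence
print(sample_counts(logits, temperature=1.5))  # rarer words begin to surface: deviation from the mean
```

Even at high temperature, the rare word surfaces only as often as the archive permits; the artist who wants genuine singularity must intervene from outside the distribution.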

VI.

Debates about AI often confuse philosophical and material levels. The issue is not whether machines rival human consciousness but how their operation reshapes artistic labour and perception. Recognising this distinction allows a more grounded response – one that looks beyond replacement anxiety to the practical ethics of use.

Much of the anxiety surrounding AI stems from a deeply held but questionable assumption: that artistic value resides in originality, understood as creation from nothing. Yet literary theory has long challenged this notion. Modernist poet and critic T.S. Eliot argued in “Tradition and the Individual Talent” that the most individual parts of a poet’s work are often those in which past voices speak most clearly. American literary critic Harold Bloom’s concept of the anxiety of influence revealed that all poets wrestle with their predecessors, their work emerging from this struggle rather than from isolated genius. French literary theorist Roland Barthes declared the death of the author, suggesting that texts are woven from citations, references, and echoes of other texts – what Bulgarian-French philosopher Julia Kristeva termed intertextuality. From this perspective, all writing, all making, is already a form of remix.

If we accept that creativity has always involved recombination – that English playwright William Shakespeare drew from historian Raphael Holinshed, that Irish modernist James Joyce rewrote Homer, that every artist stands on the shoulders of those who came before – then AI’s method of generating work from existing patterns is not a violation of creativity but an externalisation of a process that has always occurred internally. The difference lies not in kind but in visibility. Where human artists absorb influence unconsciously through years of reading, listening, and looking, AI systems make this process explicit, statistical, and mechanical. The discomfort may arise not because AI does something fundamentally different, but because it reveals what we have always done.

Collaboration remains a fitting term, though asymmetrical and historically situated. It implies negotiation, awareness of limits, and shared responsibility. The human role may no longer lie in direct making but in mediation: deciding what to use, what to resist, and how to frame meaning within automated processes. This is not a diminishment of artistic practice but a shift in emphasis – from the romantic ideal of solitary genius to a more honest acknowledgment of art as always already collaborative, always in dialogue with what has come before.

These choices are not made in a vacuum. Artists must navigate economic pressures shaping access to tools and determining the viability of practice. Addressing these realities may require collective responses: advocacy for fair compensation when work is used in training data, support for open-source alternatives to proprietary systems, and development of new models recognising human creative labour within automated workflows. The question is not only how to use AI thoughtfully but how to ensure its deployment does not concentrate wealth and power further.

In this light, art becomes less a matter of production and more one of discernment – the capacity to select, edit, and situate. Creativity lies in establishing relationships between systems rather than producing forms in isolation. These practices remind us that technology has always mirrored human attention. What is at stake now is not simply authorship but care: remaining attentive to the sources, implications, and consequences of what we bring into being, while resisting pressures that would reduce creative practice to statistically probable outputs.

History does not repeat, but it rhymes. The arguments surrounding AI echo those that accompanied the camera, the phonograph, the synthesiser – each time accompanied by declarations of artistic obsolescence, each time followed by creative adaptation and transformation. Recognising these continuities does not diminish present concerns but situates them within a longer trajectory of technological change and cultural response. The value lies not in proclaiming radical breaks or celebrating absolute newness, but in understanding how current challenges inherit from past negotiations between human intention and technological capacity. This awareness allows us to draw upon accumulated knowledge about artistic adaptation, ethical responsibility, and collective resistance, rather than responding to AI as though we face such questions for the first time. The conversation is ongoing, and we are participants in a lineage of makers who have always worked alongside forces they could guide but never fully control.

#stuffiwonderabout #tingjegspørgermigselvom

Kristoffer Ørum @Oerum