What is Drift Factory?
Take an image. Ask an AI to copy it. Feed that copy back as the next input. Repeat a hundred times.
Each pass, the model doesn't scan pixels — it interprets. It compresses the image into an abstract mathematical space, then re-imagines it from that compressed meaning. Like translating a novel into a summary and back into a novel. Each translation is close, but something is always lost. And something is always added.
Over a hundred iterations, those small errors compound. Textures melt. Faces warp. Straight lines bend. The model starts hallucinating details that were never there. What emerges isn't noise — it's structured degradation. An aesthetic that's unique to each model, each input, each set of parameters.
The drift is the art.
The Process
- 01 Pick a source image — a portrait, a bridge, a pizza box
- 02 Ask an AI image model to replicate it exactly
- 03 Feed the output back as the next input
- 04 Repeat 100+ times — each pass drifts further
- 05 Compile every frame into a time-lapse video
- 06 Add a soundtrack and release the batch
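The loop itself is tiny. Here's a runnable sketch of the pipeline, with a hypothetical `replicate()` stub standing in for the actual image-model call (a real run would send the image to a generation API at that step):

```python
def replicate(image):
    # Stand-in for one "copy this exactly" AI pass. A real run would call
    # an image model here; this hypothetical stub just perturbs each pixel
    # slightly so the loop is self-contained and runnable.
    return [round(px * 0.99 + 0.005, 3) for px in image]

def drift(source, passes=100):
    frames = [source]            # step 01: the source image is frame zero
    img = source
    for _ in range(passes):
        img = replicate(img)     # steps 02-03: replicate, feed the copy back
        frames.append(img)       # step 05: keep every frame for the time-lapse
    return frames

frames = drift([0.1, 0.9, 0.4], passes=100)
```

The whole method lives in that one feedback line: the output of pass N is the only input to pass N+1.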
What's actually happening
It's a game of telephone for images
AI image models don't copy — they interpret. Every time the model processes an image, it compresses it into a lower-dimensional "latent space" — a mathematical representation that captures the broad meaning but throws away fine detail. Then it reconstructs an image from that compressed version. Each encode-decode cycle preferentially erases detail while preserving structure. That's why text degrades first and faces survive longest: text is pure fine detail, while a face is a structural pattern the model has seen millions of times.
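You can watch that detail-versus-structure asymmetry in a toy model. Below, a 1-D "image" is the sum of a slow sine wave (structure, the face) and a fast alternating pattern (detail, the text); pair-averaging stands in for latent compression. This is an analogy, not the real architecture, but the asymmetry it shows is the same:

```python
import math

def encode(img):
    # "Compress" to a lower-resolution latent: average adjacent pixel pairs
    return [(img[i] + img[i + 1]) / 2 for i in range(0, len(img), 2)]

def decode(latent):
    # "Reconstruct" full resolution: duplicate each latent value
    return [v for v in latent for _ in range(2)]

# Toy 1-D image: slow "structure" (a face) plus fast "detail" (text)
structure = [math.sin(2 * math.pi * i / 16) for i in range(16)]
detail = [0.5 * (-1) ** i for i in range(16)]
original = [s + d for s, d in zip(structure, detail)]

img = original
for _ in range(5):
    img = decode(encode(img))

# Measure the alternating (fine-detail) component before and after
alternating = lambda xs: sum(x * (-1) ** i for i, x in enumerate(xs))
print(f"detail before: {alternating(original):.2f}, after: {alternating(img):.2f}")
# prints "detail before: 8.00, after: 0.00"
```

The alternating detail is erased by the very first encode-decode cycle; the slow sine survives with only small error. Swap "fast alternation" for lettering and "slow sine" for facial structure and you have the mechanism.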
Errors don't add up — they multiply
Each iteration doesn't just add a fixed amount of noise. It introduces errors that change what the next iteration sees. The model interprets its own mistakes as intentional features and faithfully reproduces them. A slightly warped eye becomes a more warped eye becomes an abstract spiral. By frame 50, the image has forgotten the original entirely — but it hasn't collapsed into randomness. It's found something else.
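A minimal numerical caricature of that multiplication, with an assumed per-pass `gain` parameter (the amount by which the model exaggerates deviations from its prior; the specific numbers are illustrative, not measured):

```python
prior = 0.5            # the value the model "expects" to see
x = 0.501              # source image: barely off the prior
gain = 1.2             # assumed per-pass exaggeration of deviations

history = [x]
for _ in range(50):
    x = prior + gain * (x - prior)   # the model amplifies its own mistake
    x = max(0.0, min(1.0, x))        # pixel values saturate at the extremes
    history.append(x)
```

The deviation grows geometrically (roughly as 1.2^n) until it pins against the limit: by frame 50 the value has forgotten the original 0.501 entirely, yet it hasn't become random. It has settled somewhere new.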
Models dream in attractors
Strip away the prompt. Strip away the input. What's left is pure model — generating from its own biases with no connection to external reality. Every model has "attractors": stable visual patterns it converges toward regardless of where it started. Our logo icon, after 50 passes, settled into a circular form resembling a no-entry sign. A pizza mascot became a porcelain doll. These attractors are the model's unconscious — the image it dreams when no one is asking it to dream anything specific.
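Mathematically, an attractor is just the fixed point of an iterated map. The sketch below uses `math.cos` as a stand-in contraction mapping (an analogy for one generation pass, not anything from a real model): no matter where you start, repeated application lands on the same value.

```python
import math

results = []
for start in (0.0, 1.5, -3.0, 10.0):
    x = start
    for _ in range(100):
        x = math.cos(x)      # one "generation pass": a contraction mapping
    results.append(x)

print([f"{r:.6f}" for r in results])
# every starting point converges to the same fixed point, ~0.739085
```

Four wildly different starting "images", one destination. The starting point only decides the path, not the endpoint; the endpoint belongs to the map itself.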
The model doesn't collapse into chaos. It collapses into certainty.
That's worse, and more beautiful. The final state isn't gibberish — from the model's perspective, it's a self-consistent, coherent reality. A "pathological equilibrium" where the output perfectly satisfies the model's own expectations. It just happens to be a reality that looks nothing like ours.
A lineage of beautiful decay
Artists have been making beauty from broken signals for over 50 years. We just have a new machine to break.
Alvin Lucier — "I Am Sitting in a Room"
Lucier recorded his voice, played it into a room, re-recorded it, and repeated 32 times. His words dissolved into pure tone — the resonant frequencies of the physical space. The room's architecture was literally revealed through the destruction of the original signal.
Lucier's room had resonant frequencies. AI models have them too. We call them attractors.
William Basinski — "The Disintegration Loops"
While transferring old tape loops to digital, Basinski discovered the magnetic coating was flaking off the tape as it played. He let it run and recorded the result — music slowly consuming itself. He finished the recordings on September 11, watching the Manhattan skyline from his Brooklyn rooftop.
Basinski's decay was accidental. Ours is deliberate and repeatable — but the aesthetic is the same: beauty emerging from information loss.
VHS generational loss
Every VHS copy introduced noise from the magnetic tape's imperfections. Colors bled, clarity dropped, grain accumulated. Bootleg culture built an entire aesthetic around this — the degraded look now signals nostalgia and underground authenticity.
VHS degradation was a dumb physical process. AI degradation is an interpretive one — the model isn't adding noise, it's misunderstanding.
Model collapse enters the scientific literature
In 2024, Shumailov et al. published in Nature that AI models trained on recursively generated data suffer "irreversible defects." The tails of the distribution disappear first — the rare, the unusual, the unexpected. The bland middle survives longest. By 2025, researchers found that even 0.1% synthetic data in a training set is enough to trigger collapse.
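The collapse mechanism can be reproduced in miniature: repeatedly fit a Gaussian to samples drawn from the previous generation's fit. The tiny sample size (n = 10) deliberately exaggerates the effect for the demo; it is a caricature of the paper's setup, not a replication of it.

```python
import random
import statistics

random.seed(0)
mu, sigma = 0.0, 1.0          # generation 0: the "real data" distribution
for gen in range(1, 501):
    # Each generation is trained only on samples of the previous
    # generation's output (n = 10 exaggerates the effect for the demo)
    samples = [random.gauss(mu, sigma) for _ in range(10)]
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    if gen % 100 == 0:
        print(f"generation {gen}: sigma = {sigma:.4f}")
```

`sigma` shrinks toward zero: rare tail values are undersampled in every finite draw, each refit narrows the distribution a little, and the narrowing compounds with no way back. That is the "tails disappear first" result in ten lines.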
The thing that keeps AI researchers up at night is our raw material.
What we've learned
Text dies first, faces survive longest
AI models don't read text — they see it as shapes. Text is the most fragile element. But faces, which humans are most attuned to, are structurally resilient because models have been trained on millions of them. "DRIFT FACTORY AI" became "DRIFY FACTORY 4!" by frame 10. Portraits held recognizable structure through frame 50.
Cheaper models make better art
A $0.01/image model produces more dramatic, faster collapse than a $0.02/image model. Less capacity means less faithful reproduction, which means more creative error per frame. Model choice is a creative decision, not just a cost one.
Models degrade into themselves, not noise
The endpoint isn't static. It's a stable attractor specific to each model. Our logo icon didn't dissolve into randomness — it converged into a circle. A pizza mascot became a porcelain doll. Different models find different fixed points from the same starting image.
Filled shapes survive; outlines die
Our logo icon (thin strokes, no fill) collapsed to near-black by frame 25 on a smaller model, while the subscribe CTA (filled shapes, bold text) held up through all 100 frames. The model needs "mass" to anchor against. Negative space and fine lines are the first casualties.
Follow the collapse
New batches drop regularly. Subscribe to watch reality dissolve.