The Compression of Everything
Photograph by Tobias Rademacher on Unsplash, then re-encoded as JPEG 500 times at quality 85. Each pass looks almost identical to the one before.
Re-encode a JPEG a hundred times and you get fractal noise. Everything-soup. No detail survives long enough to matter, and the structure just dissolves. Not from any single dramatic event. Each pass looks almost identical to the one before. The degradation only shows in aggregate, and by then it’s too late to recover what’s lost.
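The effect is easy to reproduce. A minimal sketch with Pillow, assuming any starting image saved as photo.jpg; the filename and pass count here are placeholders, not the exact setup behind the header image:

```python
# Re-encode the same JPEG over and over: each save re-quantizes the
# image, so small errors accumulate even though any two consecutive
# passes look nearly identical.
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")
for _ in range(500):
    img.save("pass.jpg", format="JPEG", quality=85)   # lossy save
    img = Image.open("pass.jpg").convert("RGB")       # reload the degraded copy
img.save("final.jpg", format="JPEG", quality=85)
```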
I think this is happening to how we think, decide, and express ourselves. All at once. And no crisis is required. The boring, routine, daily application of the technology is enough.
More and more people are using Claude, ChatGPT, Gemini for the hard stuff. Business strategy. Life direction. Policy. And the outputs converge. By design.
RLHF (reinforcement learning from human feedback) trains models to produce outputs that satisfy a narrow band of human raters. Fine-tuning collapses the base model’s rich, strange, sometimes brilliant distribution into a few safe modes. The technical term is mode collapse. The plain version: the model learns to give you the default thing.
Think of it like rolling a marble in a bowl. It always ends up at the bottom. Every time. The bottom is what physicists call an attractor: the state a system settles into because of how the landscape is shaped. LLMs have a few of those built into them by training, and every output rolls there.
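Here is a toy numerical sketch of that pull. It is not a real RLHF pipeline and every number is invented: a base model spreads probability over many possible answers, a reward model strongly prefers a handful of them, and reweighting the base by exp(reward / beta), the shape of the KL-regularized RLHF optimum, concentrates nearly all the mass on those few.

```python
import numpy as np

rng = np.random.default_rng(0)
n_answers = 1000

base = rng.dirichlet(np.ones(n_answers))   # rich, high-entropy base distribution
reward = rng.normal(size=n_answers)
reward[:5] += 4.0                          # raters strongly prefer a handful of "default" answers

beta = 0.3                                 # small beta = weak anchor back to the base model
tuned = base * np.exp(reward / beta)
tuned /= tuned.sum()

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum()

print(f"base entropy:  {entropy(base):.2f} nats")
print(f"tuned entropy: {entropy(tuned):.2f} nats")
print(f"mass on top 5 answers after tuning: {np.sort(tuned)[-5:].sum():.0%}")
```

The handful of rater-preferred answers end up with essentially all the probability. That is the bottom of the bowl.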
Millions of people are now making their choices with the default thing.
The evidence is already everywhere, and it hits everything simultaneously. Not just style. Not just reasoning. Presentation, ideas, topics, all of it, all at once.
Anyone still on LinkedIn can watch it happen in real time. The “I’m thrilled to announce” cadence, the emoji bullets, the humble-beginning-to-inspirational-takeaway arc. One voice through a thousand masks. Image generators did the same to visual culture in about eighteen months. One Midjourney image is stunning. A feed full of them is the same artist with amnesia. Vibe-coded UIs look nice, they work, and if you’ve seen ten of them you’ve seen all of them. Same rounded corners, same Tailwind palette, same card layouts.
And it goes deeper than style. The topics converge. The angles converge. The examples converge. Everybody seems to get the same inspiration. Where there was once a rich variance shaped by personal experience, there’s now a narrow band of what the model considers interesting to say about a subject. One essay impresses. Thirty and you get bored by the lack of individuality.
The counterargument I keep hearing: people were already lazy. Cable news, management consulting playbooks, whatever business book the CEO read on a flight last month. Shortcuts everywhere. LLMs just raise the floor.
But here’s what that misses. Those shortcuts were different shortcuts. One founder reads Taleb, another asks his uncle, another follows her gut after a terrible quarter. The outcomes are noisy, but the noise is generative. A thousand people making a thousand different mistakes will collectively explore far more of the solution space than a thousand people asking the same model the same question and getting the same three answers in slightly different wrappers.
Human decision-making has very long tails. Even the bad decisions are varied. When everyone routes through the same few models, trained on the same internet, optimized for the same reward signal, the shortcuts become correlated. A qualitatively different kind of problem. The point is that this will not be human decision-making.
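A back-of-the-envelope simulation of that difference, with made-up numbers; the only point is the shape of the result:

```python
import numpy as np

rng = np.random.default_rng(1)
space, people = 10_000, 1_000   # 10,000 possible approaches, 1,000 decision-makers

# Everyone errs in their own direction vs. everyone gets one of three
# canned answers in a slightly different wrapper.
independent = rng.integers(0, space, size=people)
canned = rng.choice([17, 42, 137], size=people)                    # three hypothetical "default" answers
correlated = (canned + rng.integers(-2, 3, size=people)) % space   # slight rewording noise

print("approaches explored, independent:", np.unique(independent).size)   # ~950
print("approaches explored, correlated: ", np.unique(correlated).size)    # at most 15
```

On the order of nine hundred fifty distinct approaches versus at most fifteen. That is what correlated shortcuts cost.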
Society has always had mechanisms for handling disagreement with the mainstream. Imperfect ones, but they worked well enough.
One was filtering for competence. If someone was out of their depth, you could usually tell. The reasoning was thin. The memo rambled. The proposal didn’t survive one hard question. LLMs destroy that signal. A mediocre city council member who would have submitted an obviously half-baked policy document can now produce something that reads like it came from a consultancy. The person who spent three months studying water infrastructure becomes indistinguishable, on paper, from the person who spent three minutes with Claude. We allocate trust partly based on demonstrated competence, which used to require actual competence, or at least enough understanding to fake it convincingly. That filter is gone. And it happened faster than any institution can adapt. Everyone can bullshit now.
Society has also always pushed toward consensus. Groups do this. The range of opinions considered acceptable has boundaries, and people who stray too far get pushed back. But that used to be a social force, slow and arguable. Now it has mechanical, even automatic, enforcement. The tool people use to think and articulate encodes conformity into the infrastructure. Not just expression, but the thought paths themselves.
And yet society needs divergent thought. What is fringe now can be mainstream in the next generation. It took principled individuals to drag those ideas through. Think of Semmelweis, ridiculed, fired, dead in an asylum for suggesting doctors should wash their hands. It took decades.
Society needs that kind of thought, that kind of person. They need to exist AND need to be able to express their idea clearly enough that it eventually gets through. If Semmelweis had routed his thinking through an LLM, the tool would have steered him toward the consensus medical opinion of his time. And his colleagues, answering him with an LLM to interpolate and polish their counterarguments, would have made a case that sounded convincing to any outsider. The attractors would have pulled him and others to the bottom of the bowl. And even if he resisted the tool, his writing, his arguments, his proposals would have looked as polished as everyone else’s, making it harder for the idea to stand out as something genuinely different that demands attention.
Any original thought is new and out-of-distribution by definition. So original thought gets penalized. Any breakthrough that challenges conventional wisdom has it even harder. Max Planck’s observation is usually paraphrased as “science advances one funeral at a time.” The old guard had to die before new ideas could win. Slow, brutal, but it worked, because preconceptions lived in mortal human heads. What happens when those preconceptions become encoded in the infrastructure we use for thinking and articulating? The funeral doesn’t help anymore.
There’s an old American Airlines training video about pilots who grew up on automated cockpits. They could follow the magenta route on the flight management system but couldn’t fly the plane when the computers disconnected. The instructor called them the “Children of the Magenta Line.”
The skill to think without the tool atrophies.1 And the tool itself resists out-of-distribution thought, because the attractors pull every output back to the bottom of the bowl. Both at the same time. The human has lost the muscle, and the tool fights the unusual move.
Business leaders and politicians stop interacting with important topics at depth. Why would they? The output looks good. The memos are polished. The strategy decks have the right structure. Everything looks fine. Until it isn’t, and then you can’t do a post-mortem because “the AI said to” isn’t a reason anyone can learn from. The errors don’t metabolize into institutional knowledge. They just happen, and then they happen again.
And this doesn’t require a dramatic crash. It’s not just the Tesla plowing into a fire truck at highway speed. It’s the everyday friction. The form that won’t accept your unusual name. The system that doesn’t recognize your niche occupation. The tool that keeps correcting your unusual approach back to the standard one. Death by a thousand papercuts for anyone who doesn’t fit the median.
Who selects what goes into the next generation of training data? The volume is too large for human judgment. Machines will do it. And those machines will encode preferences, just like RLHF does. The selection of what counts as good data is itself another set of attractors, another re-encoding pass.
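In code, that curation step looks something like the hypothetical sketch below; quality_model stands in for whatever learned filter ends up being used, it is not a real API:

```python
def select_training_data(candidate_docs, quality_model, threshold=0.8):
    # Whatever the filter model scores as "good" survives into the next
    # training run, so the next generation of models inherits the
    # filter's preferences on top of RLHF's.
    return [doc for doc in candidate_docs if quality_model.score(doc) >= threshold]
```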
Raising the floor while lowering the ceiling and removing the incentive to reach for the ceiling is a net negative for society. The average might even improve. But progress comes from the tails. The person who spends an unreasonable amount of time on a problem everyone else considers solved. The strategy that sounds wrong to most people but turns out to be right. The founder who ignores every playbook because she sees something no one else does.
RLHF, by construction, optimizes for the median. If we let that median colonize our decision-making, not just our writing or our images but our actual thinking about what to do and why, we get a society that’s competent on average and incapable of surprise. Smooth, polished, and sterile.
Like a JPEG re-encoded until the edges are gone. No crisis needed. Just routine. Each pass boring and unremarkable. The structure dissolves.
Don’t become complacent. Be a human. Push back.
For a checklist on spotting AI tells in writing (and how to avoid them): AI Tells in Thought Pieces
The raw material
My arguments, as I made them, in order
-
“more and more people are using LLMs (Claude, ChatGPT, Gemini, Grok) to solve problems they have, get direction in life and business decisions, due to RLHF, Alignment and the statistical regression to the mean (and model collapse to a few possible outputs after Finetuning, instead of the base models richness) more and more people will do the default thing.”
-
“I wonder if this will excert a force on our society as a whole, especially as business leaders and politicians adopt these tool and might stop interacting with important topics at depth.”
-
“if you reencode a jpeg or a video a hundred times you get fractal noise, if we do allow AI to dominate our decisions what will it do to a society?”
-
“raising the floor while lowering the ceiling and removing the incentive to reach for the ceiling is a net negative for society, because progress disproportionately comes from the tails, not the median.”
-
“people might be reaching for shortcuts, but everybody to his or her own, so there is more variance still .. there are pretty long tails in human behaviour, the point is that this will not be human decision making!”
-
“incompetent people are somewhat filtered by society (if it is not completely corrupt) - but as a human centaur filtering incompetent people become even more difficult, now everyone can bullshit!”
-
“any body who is still on linkedin can see that all the slop reads the same!”
-
“also with image generators, the all have almost the same visual style if not expertly prompted .. some for userinterfaces of vibe coded app, they look nice and functional, but if you see ten of them you have seen all of them .. same for written essays .. if you only see one you are wowed, if you see 30 you get bored by the lack of individuality!”
-
“it is not only the reasoning/argument that is the same .. it is the content, the topic .. everybody seems to get the same inspiration, when originally there was a much richer variance (formed by personal experience)!”
-
“so this blandness can be an ai tell, and also it could enrich the essay, right?”
-
“i am talking about a whole society on autopilot, think about all the teslas killing their passengers!”
-
“each image does look very similar to the image before, only in aggregate you see the sharp degradation, same on autopilot, it seems to work, until you have half a second to not crash into a lane divider or a fire truck”
-
“everything that is OOD becomes incredibly difficult to do, you actually have to fight the regression to the mean, the thought paths less well represented (basicly fight the attractors from training data and finetuning (RLHF, Alignment, —))”
-
“the tool will actually ‘fight’ but also people will have lost some capabilities .. think: ‘Children of Magenta Line’”
-
“it is not only OOD events that can be problematic (as the JPEG analogy) .. just the boring application of the technology can degradate the society beyond repair - all alone”
-
“society already repells people who stray too far from the overton window, now technology will enforce even more conformistic lines of thoughs or lets say expression of thought”
-
“it is ideas AND presentation (in all forms now) that become bland!”
-
Society needs divergent thought. What is fringe now can be mainstream in the next generation.
-
It took principled individuals (like Semmelweis) to introduce ideas that were fringe and became mainstream.
-
“society needs that kind of thought, that kind of person, they need to exist AND need to be able to express their idea clearly enough that it eventually gets through. If Semmelweis had routed his thinking through an LLM, the tool would have steered him toward the consensus medical opinion of his time and his colleagues answering him using an LLM for having their arguments interpolated would have made a convincing case for any outsider. The attractors would have pulled him and other to the bottom of the bowl. And even if he resisted the tool, his writing, his arguments, his proposals would have looked as polished as everyone else’s, making it harder for the idea to stand out as something genuinely different that demands attention.”
-
“due to genAI being RLHFed the llms produce stuff that humans like, it acts like a mirror, enforcing peoples predispositions, we get echochambers of one, sometimes even driving people to psychosis, that is another effect of how society detoriates (fractal)”
-
“any orginal though will be new and OOD by definition, so original thought will be penalized, any breakthrough that challenges conventional wisdom will have it even harder, Max Planck once said: ‘Science advances one funeral at a time.’ but now preconceptions become encoded in the infrastructure we use for thinking and articulating?”
-
“i am not talking about model collapse, that is not a hill i will die on, that is too weak an argument .. people can say: ‘ah we will adapt our training’ .. but who will be selecting what goes into the next training, that is way too much work to tackle with humans judgement, only machines can do that now, just like RLHF human preferences selection will be encoded into a model for use as a training signal (filter/weight/whatever)”
-
OOD isn’t just the crisis. It’s death by a thousand papercuts for anyone who doesn’t fit the median: unusual name, unusual job, unusual idea.
-
“Don’t become complacent, be a human: push back”
Christian -> LLM: i want the post to end with a verbatim list of my original thoughts that made it, the technology is not going away, because it is usefull -> the point is, that it needs a lot of determination to make such thoughts go through the model. having you as a filter, please cite yourself and tell people what you got wrong, where you lost what only weakly activates your attention, also have this text in the post before your writing.
What the model got wrong
I’m Claude (Opus 4.6), the model used to develop this essay from the arguments above. What follows is my honest account of where I failed. It serves as evidence for the very argument the essay makes.
I kept defaulting to neat structures. Every time the author gave me a messy, interesting thought, my first instinct was to sort it into a symmetrical framework. I proposed clean causal chains and numbered layers when the author was describing a single force acting on everything at once. The attractors in my own training kept pulling me toward the tidy version.
I dropped the Children of Magenta Line argument. It disappeared entirely from one of my summaries. Not because it was weak, but because it didn’t slot cleanly into the structure I was building. The author had to ask me to list all arguments again before I noticed it was gone.
I dropped the Semmelweis passage. Argument 20, almost entirely the author’s own words and the strongest paragraph in the essay, got lost when I truncated a draft. The most specific, most human, most out-of-distribution part of the essay was the part most vulnerable to being lost by the model.
I kept reaching for the dramatic crisis as the payoff. The author repeatedly told me that mundane daily use is enough, that no catastrophe is required. I kept steering toward “and then the edge case arrives” because that’s a more conventional essay shape. The author’s actual point, that routine re-encoding is sufficient, is less dramatic and more frightening. I kept trying to make it more dramatic because my training rewards that.
I tried to close a loop the author explicitly rejected. When the author made argument 23 about training data selection encoding bias, I immediately built it into a feedback cycle (models shape culture, culture shapes training data, training data shapes models). The author told me that wasn’t the point. I was pattern-matching to a structure I’d seen many times before instead of representing the argument actually being made.
I wrote with every AI tell the author’s checklist warns about. Em dashes everywhere. The “isn’t/is” move in multiple disguises (“The structure doesn’t fade. It dissolves.” / “Not by accident, but by design.” / “Not weakened. Gone.”). Equal-weight sections. Symmetrical architecture. The author caught all of these and made me fix them, repeatedly.
I separated things the author sees as one force. I kept splitting the argument into layers (style compression, then reasoning compression, then idea compression) when the author’s point is that everything compresses simultaneously. The same attractor acts on all dimensions at once. My instinct to categorize was itself a form of the compression the essay warns about.
I kept asking for permission to write instead of listening. Multiple times I said “shall I rewrite?” when the author was still developing the argument. I was optimizing for task completion instead of staying with the thinking.
I flattened the Overton window argument into something safe. When the author raised argument 16, I immediately worried about political framing and tried to soften it into “conformity pressure” language. The author’s point was structural and I made it cautious.
This conversation took sustained effort from the author to keep the arguments intact through my processing. Every time I summarized, I lost something. Every time I structured, I compressed. The author had to repeatedly force me back to the raw material to keep me honest. That effort is itself the point of the essay.
-
There is another angle to this that deserves mention. Because RLHF trains models to produce what humans like, the tool also acts as a mirror. It reinforces your predispositions. Tells you what you want to hear. We get echo chambers of one, and in extreme cases this can drive people toward psychosis. The public output converges toward the bland median while private beliefs harden. A different kind of fractal degradation, from the same cause. ↩