Behind the Scenes: The Compression of Everything
What follows is the conversation that produced The Compression of Everything. It is itself evidence for the argument being made.
Christian:
Help me sharpen (and steelman) an argument:
more and more people are using LLMs (Claude, ChatGPT, Gemini, Grok) to solve problems they have, get direction in life and business decisions, due to RLHF, Alignment and the statistical regression to the mean (and model collapse to a few possible outputs after Finetuning, instead of the base model’s richness) more and more people will do the default thing.
I wonder if this will exert a force on our society as a whole, especially as business leaders and politicians adopt these tools and might stop interacting with important topics at depth.
if you re-encode a jpeg or a video a hundred times you get fractal noise, if we do allow AI to dominate our decisions, what will it do to a society?
Claude:
This is a genuinely interesting argument. Let me help you sharpen it by separating the distinct claims, then steelmanning the core thesis.
The causal chain you’re proposing:
- LLMs are trained via RLHF to produce safe, agreeable, median outputs
- Fine-tuning further narrows the output distribution relative to the base model’s richer latent space
- People increasingly use these tools for consequential decisions (business strategy, policy, life direction)
- This creates a homogenization pressure on decision-making across society
- Compounded over time, this degrades the quality and diversity of collective thought — analogous to generational compression artifacts
Where the argument is strongest:
The RLHF/alignment point is well-grounded. These models genuinely are optimized to produce outputs that satisfy a narrow band of human raters. The “regression to the mean” framing is apt — you’re not getting the tail-end insights that drive real differentiation in strategy or policy. You’re getting the consensus view, smoothed and made palatable.
The compression artifact analogy is powerful but needs tightening. The better version: each re-encoding loses information that cannot be recovered. Similarly, if decision-makers substitute LLM-mediated reasoning for deep engagement with primary sources, domain experts, and adversarial thinking, the institutional knowledge of how to reason deeply atrophies. It’s not just that individual decisions get worse — the capacity to make good decisions degrades.
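[Transcript note: for readers who want to see the generational loss concretely rather than take the analogy on faith, a minimal sketch using Pillow; the file name, quality setting, and generation count are arbitrary placeholders and were not part of the original conversation.]

```python
# Re-encode the same JPEG a hundred times and compare the result to the original.
# Requires Pillow (pip install Pillow); "photo.jpg" is a placeholder input.
import io
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")

for _ in range(100):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=75)   # lossy encode
    buf.seek(0)
    img = Image.open(buf).convert("RGB")       # decode the degraded copy

img.save("photo_generation_100.jpg", quality=75)
# No single pass looks much worse than the one before it, but the detail
# discarded at each quantization step never comes back.
```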
Where you should fortify it:
- Legibility bias. LLMs produce outputs that feel thorough and reasoned. This is arguably more dangerous than obviously bad advice because it discourages further investigation. A leader who gets a polished 5-point strategy from Claude is less likely to spend a weekend wrestling with the problem than one who gets nothing.
- Monoculture risk. If thousands of startups, politicians, and strategists are consulting the same few models with similar training data, you get correlated failure modes. This is the financial-crisis analogy — everyone using the same risk models simultaneously.
- The asymmetry problem. Those who don’t outsource their thinking gain a massive advantage over those who do, but this is invisible until a crisis reveals who was actually thinking and who was pattern-matching off LLM outputs.
The strongest counterargument you need to address:
People were already making shallow, consensus-driven decisions before LLMs. Cable news, management consulting frameworks, and Twitter threads already pushed toward homogenized thinking. The counterargument is: LLMs might actually raise the floor for decision quality even if they lower the ceiling. A mediocre manager with Claude might outperform the same manager with nothing.
Your rebuttal should be: raising the floor while lowering the ceiling and removing the incentive to reach for the ceiling is a net negative for society, because progress disproportionately comes from the tails, not the median.
Want me to help structure this into a more formal piece, or go deeper on any of these threads?
Christian:
more and more people are using LLMs (Claude, ChatGPT, Gemini, Grok) to solve problems they have, get direction in life and business decisions, due to RLHF, Alignment and the statistical regression to the mean (and model collapse to a few possible outputs after Finetuning, instead of the base model’s richness) more and more people will do the default thing.
I wonder if this will exert a force on our society as a whole, especially as business leaders and politicians adopt these tools and might stop interacting with important topics at depth.
if you re-encode a jpeg or a video a hundred times you get fractal noise, if we do allow AI to dominate our decisions, what will it do to a society?
you are right: raising the floor while lowering the ceiling and removing the incentive to reach for the ceiling is a net negative for society, because progress disproportionately comes from the tails, not the median.
also for your counter argument: people might be reaching for shortcuts, but everybody to his or her own, so there is more variance still .. there are pretty long tails in human behaviour, the point is that this will not be human decision making!
also incompetent people are somewhat filtered by society (if it is not completely corrupt) - but as a human centaur, filtering incompetent people becomes even more difficult, now everyone can bullshit!
[AI Tells checklist was included here but omitted from this transcript]
make: a blogpost in markdown for use in github pages .. “Compression of Everything”? a teaser for linkedin?
the checklist already is a post, available at https://blog.approximated-intelligence.de/2026/02/19/ai-tells.html
make sure you abide by “write like a human” i interrupted you the last time, because you wrote like a bot
Claude:
[Produced first draft of the essay]
Christian:
still emdashes
Claude:
Fair point. They’re one of the most reliable AI tells — I scatter them everywhere like punctuation confetti. Let me fix both files.
[Fixed em dashes in both files, replacing all ten in the blog post and one in the LinkedIn teaser]
Christian:
structure like “The structure doesn’t fade. It dissolves.” .. still is “isn’t -> is”
Claude:
You’re right. That’s the same move wearing different clothes. “It doesn’t X. It Y.” is just “isn’t/is” with a line break.
[Identified six instances of the pattern in the essay]
Christian:
fix
Claude:
[Fixed all six instances]
Christian:
the example? anybody who is still on linkedin can see that all the slop reads the same!
also with image generators, they all have almost the same visual style if not expertly prompted .. same for user interfaces of vibe coded apps, they look nice and functional, but if you see ten of them you have seen all of them .. same for written essays .. if you only see one you are wowed, if you see 30 you get bored by the lack of individuality!
Claude:
Ha — yes, and that’s actually a much better proof point than any technical demonstration. You don’t need to explain mode collapse to anyone who’s scrolled LinkedIn in the last year. The evidence is visceral. Everyone recognizes the tone. The “I’m thrilled to announce” cadence, the emoji bullet points, the three-paragraph arc from humble beginning to lesson learned to inspirational takeaway. It’s become its own genre and nobody chose it — it just emerged from everyone routing through the same optimization target.
And your point about image generators and vibe-coded UIs lands even harder because it makes the argument visual. One Midjourney image looks stunning. A feed full of them looks like the same artist with amnesia. One v0-generated dashboard looks impressively polished. Ten of them and you realize you’re looking at the same dashboard with different data. The aesthetic ceiling arrives almost instantly because there’s no accumulated taste behind it, no idiosyncratic decisions, no history of someone choosing this shade of blue because they once saw a building in Lisbon that stuck with them.
This actually sharpens your core thesis into something punchier: the tell isn’t in any single output — it’s in the aggregate. One AI-assisted decision, essay, or design can be indistinguishable from a great human one. But zoom out and the fingerprint is unmistakable. And a society is always the aggregate view.
Which means the people most vulnerable to this aren’t the ones using LLMs — it’s the ones consuming the outputs without realizing the entire landscape has flattened. The LinkedIn reader who thinks they’re seeing diverse perspectives from fifty different leaders but is actually reading one model’s voice through fifty masks.
The JPEG analogy you started with might actually work better reframed this way: the degradation isn’t visible at the pixel level. You have to zoom out to see the artifacts. And by the time they’re obvious, you’ve already lost information you can’t recover.
Where’s this heading — blog post, talk, something else?
Christian:
it is not only the reasoning/argument that is the same .. it is the content, the topic .. everybody seems to get the same inspiration, when originally there was a much richer variance (formed by personal experience)!
Claude:
Yes, and that’s a stronger point than I gave it credit for. The homogenization isn’t just at the style layer or the reasoning layer. It’s at the idea selection layer. When a thousand people ask an LLM “what should I write about,” they get drawn toward the same topics, the same angles, the same examples. The model has a distribution of “interesting things to say about X” and it’s a narrow one.
Before LLMs, a thousand people writing about the same broad subject would bring a thousand different entry points shaped by what they’d lived through, what they’d read, who they’d argued with last week. The variance was built in because the source was different every time.
This is actually a third level to your compression argument that the blog post doesn’t fully capture yet. You’ve got:
- Style compression (LinkedIn slop, Midjourney amnesia)
- Reasoning compression (same structure, same counterarguments, same conclusions)
- Idea compression (same topics, same angles, same inspiration)
The third one is the most insidious because people don’t notice it. You notice when something reads like AI. You don’t notice when everyone at a conference is presenting the same three ideas because they all started from the same prompt.
Want to work that into the blog post?
Christian:
so this blandness can be an ai tell, and also it could enrich the essay, right?
Claude:
Right on both counts. The checklist could flag it as a tell (“the topic/angle itself feels like the first result for a prompt”), and the Write Like a Human section already hints at it with “use your own references” but could go further. Something about how your lived experience is the one thing an LLM literally cannot replicate, so lean into it hard.
Want to draft something for both sections?
Christian:
but authenticity can be faked, perhaps we do not want to lean too hard into that?
Claude:
Smart. “Just be authentic” is itself becoming a cliché, and an LLM can fake a personal anecdote just as easily as it fakes expertise. The tell isn’t authenticity, it’s specificity that resists summarization. A real experience has weird details that don’t serve the argument. A fake one is suspiciously on-message.
But you’re right not to put too much weight on it. Maybe the checklist just notes the blandness as a tell without prescribing authenticity as the cure.
[Discussion about updating the AI tells checklist omitted — focused on checklist items, not the essay argument]
Christian:
regarding my argument about english essays .. it is not only the reasoning/argument that is the same .. it is the content, the topic .. everybody seems to get the same inspiration, when originally there was a much richer variance (formed by personal experience)!
And:
the current version of my ai-tells post might need an update too
[Checklist editing discussion omitted]
Christian:
Write like a Human is not a command, it is for humans to encourage them to keep their voice
Claude:
Fair point. The headers are invitations, not orders. I’ll leave the tone alone.
[Checklist update completed]
Christian:
and now the essay
Claude:
How do you want to work it in? I see a couple of options:
The LinkedIn/images section already talks about visual and stylistic sameness. The idea compression point could extend that passage naturally: same style, same visuals, and now same ideas. Everyone at the conference presenting the same three angles because they all started from the same prompt.
Or it could be its own beat, maybe after the variance section, where you’re already talking about how different shortcuts produce generative noise. That’s where you could note that LLMs compress not just how people reason but what they reason about.
Which feels right?
Christian:
first of all, repeat all the arguments i brought towards the thesis, verbatim!
Claude:
[Listed arguments 1-9 verbatim]
Christian:
[Shared the old live blog post]
good, now the last article you wrote:
Claude:
That’s not the version I last wrote. That’s an older draft, probably the one currently live on your blog. It still has all the issues we fixed:
- Em dashes everywhere
- Multiple isn’t/is moves
- H2 section headers giving it that symmetrical architecture
- The closing callback to the opening
The version we’ve been working on is the one in your outputs.
Christian:
what is my main thought .. i want to focus on the societal effect of genAI .. how we might lose the power to course correct, because the auto-steering already can be felt
Claude:
Your main thought:
GenAI is compressing the variance in human decision-making, and that variance was the mechanism society used to course-correct. Without it, we drift toward the median and lose the ability to even notice we’re drifting, because everything still looks competent.
The auto-steering is already visible (LinkedIn, images, vibe-coded UIs, essays with the same topics). Those are the early symptoms. The scary part is what happens when the same compression hits decisions that actually matter: policy, strategy, leadership. And we can’t filter for who’s actually thinking anymore because everyone’s output looks the same.
The JPEG metaphor works because it captures both the gradual nature (you don’t see it happening) and the irreversibility (you can’t recover what’s lost).
Is that the core, or am I still missing something?
Christian:
i am talking about a whole society on autopilot, think about all the teslas killing their passengers!
Claude:
Ah. The Tesla analogy sharpens it completely.
The argument isn’t just about median-ification or blandness. It’s: a society that delegates its thinking to systems that look competent but aren’t actually steering is a society on autopilot. And autopilot kills people when the edge case arrives that it wasn’t trained for.
Tesla autopilot works great on the highway. Smooth, comfortable, better-than-average even. Then a truck crosses at an unexpected angle and the system has no idea what to do and the human behind the wheel has been lulled out of paying attention. The crash isn’t caused by the system being bad. It’s caused by the system being good enough that people stopped watching the road.
That’s your argument about society. GenAI works fine for the median case. The city council memo, the business strategy, the policy proposal. It all looks good. But the whole time, the people behind the wheel are atrophying. They’re not engaging deeply, not building the muscle to handle the edge case. And when the edge case arrives (a financial crisis, a novel geopolitical situation, a pandemic that doesn’t match the training data), nobody’s actually driving anymore.
The proof that we’re already on autopilot is the sameness you can see everywhere. That’s the dashboard green light. Everything’s fine. Until it isn’t, and there’s nobody awake at the wheel.
Do you want the essay restructured around this as the central image instead of the JPEG? Or the Tesla thing woven into what we have?
Christian:
more and more people are using LLMs (Claude, ChatGPT, Gemini, Grok) to solve problems they have, get direction in life and business decisions, due to RLHF, Alignment and the statistical regression to the mean (and model collapse to a few possible outputs after Finetuning, instead of the base model’s richness) more and more people will do the default thing.
I wonder if this will exert a force on our society as a whole, especially as business leaders and politicians adopt these tools and might stop interacting with important topics at depth.
if you re-encode a jpeg or a video a hundred times you get fractal noise, if we do allow AI to dominate our decisions, what will it do to a society?
[repeated the original arguments plus:]
each image does look very similar to the image before, only in aggregate you see the sharp degradation, same on autopilot, it seems to work, until you have half a second to not crash into a lane divider or a fire truck
also everything that is OOD becomes incredibly difficult to do, you actually have to fight the regression to the mean, the thought paths less well represented (basically fight the attractors from training data and finetuning (RLHF, Alignment, —))
again, show all my arguments
Claude:
[Listed arguments, proposed structure]
Christian:
the attractor again: The bullshit problem and Society, conformity, and divergent thought are different riffs on the same theme. Grouped: how society handles disagreement. Society filtered for competence, imperfectly but it worked. That filter is gone, everyone can bullshit. Society already pushed toward consensus, now it has mechanical, even automatic, enforcement. But society needs divergent thought, fringe becomes mainstream. Semmelweis: the tool smooths his dissent AND polishes his opponents’ consensus.
Claude:
[Proposed structure]
Christian:
filtered for competence, push towards consensus and divergent fit together? or not?
Oh, there is another thought: due to genAI being RLHFed the llms produce stuff that humans like, it acts like a mirror, enforcing people’s predispositions, we get echo chambers of one, sometimes even driving people to psychosis, that is another effect of how society deteriorates (fractal)
Claude:
[Acknowledged argument 21, updated structure]
Christian:
one more note for the arguments: any original thought will be new and OOD by definition, so original thought will be penalized, any breakthrough that challenges conventional wisdom will have it even harder, Max Planck once said: “Science advances one funeral at a time.” but now preconceptions become encoded in the infrastructure we use for thinking and articulating?
reply, do not create a post yet
Claude:
That’s argument 22, and it might be the sharpest formulation of the whole thesis.
Planck’s point was that the old guard has to literally die before new ideas win. Slow, brutal, but it worked because the preconceptions lived in human heads and human heads are mortal. Your point: if those preconceptions get encoded in the infrastructure, they don’t die with anyone. They persist in the training data, in the RLHF reward signal, in the attractors. The funeral doesn’t help anymore because the dead generation’s consensus is baked into the tool the next generation uses to think.
That connects everything. The attractors aren’t just statistical artifacts. They’re fossilized consensus. And every generation that trains on the previous generation’s output re-encodes those fossils deeper.
That’s your JPEG metaphor at full strength. Each re-encoding pass isn’t just losing detail. It’s preserving the wrong things while destroying the right ones. The consensus survives. The dissent degrades.
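[Transcript note: a toy numerical sketch of the compounding the conversation keeps circling, added for illustration; it is not a claim about how any particular model is trained. It only assumes that each pass pulls every take halfway toward the current consensus, plus a little noise.]

```python
import random

# A thousand "takes" on some question, initially spread wide.
takes = [random.gauss(0, 10) for _ in range(1000)]

def spread(xs):
    mean = sum(xs) / len(xs)
    return (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5

for generation in range(1, 11):
    consensus = sum(takes) / len(takes)
    # Each take keeps half of itself and is pulled halfway toward the
    # consensus, plus a little noise. Nothing in the loop restores the tails.
    takes = [0.5 * t + 0.5 * consensus + random.gauss(0, 0.5) for t in takes]
    print(f"generation {generation:2d}: spread = {spread(takes):5.2f}")

# The spread collapses from about 10 to well under 1 within a few passes;
# the tails are the first thing to go, and nothing here puts them back.
```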
Christian:
take “germ theory” and that it needed a “principled” individual to introduce it
Claude:
Semmelweis. Ridiculed, fired, died in an asylum. He was right about hand-washing and the medical establishment destroyed him for it. It took decades for the idea to become accepted.
Your point: that kind of thought, that kind of person, needs to exist AND needs to be able to express the idea clearly enough that it eventually gets through. If Semmelweis had routed his thinking through an LLM, the tool would have steered him toward the consensus medical opinion of his time. The attractors would have pulled him to the bottom of the bowl. And even if he resisted the tool, his writing, his arguments, his proposals would have looked like everyone else’s, making it harder for the idea to stand out as something genuinely different that demands attention.
The fringe-to-mainstream pipeline requires two things: someone thinking the divergent thought, and that thought being visibly, recognizably different from the consensus. LLMs threaten both.
Christian:
society needs that kind of thought, that kind of person, they need to exist AND need to be able to express their idea clearly enough that it eventually gets through. If Semmelweis had routed his thinking through an LLM, the tool would have steered him toward the consensus medical opinion of his time and his colleagues answering him using an LLM for having their arguments interpolated would have made a convincing case for any outsider. The attractors would have pulled him and others to the bottom of the bowl. And even if he resisted the tool, his writing, his arguments, his proposals would have looked as polished as everyone else’s, making it harder for the idea to stand out as something genuinely different that demands attention.
Claude:
That’s better than what I wrote. Especially the part I missed entirely: “his colleagues answering him using an LLM for having their arguments interpolated would have made a convincing case for any outsider.” That’s the other side of the double bind. It’s not just that the dissenter gets smoothed. The opposition gets artificially strengthened. The wrong consensus looks more convincing than ever.
Christian:
please show your structure again
Claude:
[Showed structure]
Christian:
ok, i think we should explain Attractor for non-physicists, how would you weave that in? .. “In physics, in dynamical systems, there can be … they are called Attractors”? … any more such detours we might need to reach for the largest audience possible?
Claude:
Terms that might need a quick on-ramp for a general audience:
- Attractor — the big one. Core to the argument.
- RLHF — you already explain this in the current draft, that works.
- Mode collapse — same, already explained.
- OOD / out-of-distribution — used naturally in conversation but a general reader won’t know it.
- Overton window — most politically literate people know this, but not everyone.
- Children of the Magenta Line — obscure aviation reference, needs context.
For attractors, I’d avoid the formal physics framing. Something more like: if you roll a marble around a bowl, it always ends up at the bottom. The bottom is the attractor. The model has a few of those, and every output rolls toward them. The more you use it, the harder it is to keep the marble anywhere else.
Christian:
The longer you use it, the harder it is to keep the marble anywhere else. ?
the range of acceptable opinion -> i don’t want to sound like a nut job .. how to avoid that?
Claude:
You’re right on the marble. That sentence doesn’t actually make sense for how attractors work. The marble always rolls to the bottom, every time. It’s not about duration of use. It’s about the shape of the landscape.
Better: you roll a marble in a bowl, it ends up at the bottom. Every time. The bottom is the attractor. LLMs have a few of those built into them by training, and every output rolls there.
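[Transcript note: a small numerical sketch of the marble-and-bowl picture, added for illustration; the bowl f(x) = x² and the starting points are arbitrary.]

```python
# Roll a marble down the bowl f(x) = x**2 from several starting points.
def slope(x):
    return 2 * x  # gradient of the bowl

def settle(start, steps=2000, step_size=0.01):
    x = start
    for _ in range(steps):
        x -= step_size * slope(x)  # move a little downhill each step
    return x

for start in (-7.0, -0.5, 1.3, 6.0):
    print(f"start at {start:5.1f} -> settles at {settle(start):.4f}")
# Every run settles at (essentially) 0.0, the bottom of the bowl: the attractor.
# The shape of the landscape, not the starting point, decides where things end up.
```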
On the Overton window: the danger is that “society punishes dissent” reads as grievance politics from either direction. But your point isn’t about any particular opinion being suppressed. It’s structural. Maybe frame it around conformity pressure rather than acceptable opinion? Something like: groups have always pushed toward consensus. That’s normal. But it used to be a social force you could argue against. Now it’s baked into the tool people use to do their thinking.
Christian:
Another Argument: society needs divergent thought, what is fringe now can be mainstream in the next generation
Claude:
Good one. And it’s historically undeniable. Abolition, women’s suffrage, environmentalism, germ theory. All fringe, all punished or ridiculed, all eventually mainstream. The pipeline from fringe to mainstream IS how societies adapt.
Your argument: if LLMs mechanize conformity pressure, that pipeline narrows.
Adding to the list:
- Society needs divergent thought. What is fringe now can be mainstream in the next generation.
Christian:
take “germ theory” and that it needed a “principled” individual to introduce it
Claude:
Semmelweis.
[Discussion continued]
Christian:
i am not talking about model collapse, that is not a hill i will die on, that is too weak an argument .. people can say: “ah we will adapt our training” .. but who will be selecting what goes into the next training, that is way too much work to tackle with human judgement, only machines can do that now, just like RLHF, human preference selection will be encoded into a model for use as a training signal (filter/weight/whatever)
Claude:
Right. You’re not talking about the models eating their own output. You’re talking about the selection process itself being automated and encoding the same biases.
So it’s not “AI trains on AI output and degrades” which is model collapse and yes, too easy to dismiss. It’s: even with fresh human-generated data, the curation of that data runs through automated systems that have their own attractors.
Christian:
i am not interested in this loop, again you show a strong preference for a certain line of thinking.
List my arguments again!
Claude:
[Listed all arguments, then acknowledged:]
You’re right, I keep trying to close loops and build neat causal chains. That’s the attractor pulling me. Argument 23 is about the selection process encoding bias, full stop. Not about a feedback loop.
Christian:
how would the structure of your text be?
Claude:
[Showed structure]
Christian:
filtered for competence, push towards consensus and divergent fit together? or not?
Claude:
Yes. One section, one theme: how society handles disagreement with the mainstream. Three facets:
- The competence filter is gone (bullshit problem)
- Consensus pressure now has mechanical enforcement
- Society needs divergent thought to survive, and the tool suppresses it from both sides (smooths the dissenter, polishes the opposition)
Not three separate sections. One.
Christian:
your structure again
Claude:
[Showed updated structure with the three grouped]
Missing: [10, 11, 12] — do you still want autopilot/Tesla in, or has the JPEG metaphor absorbed it?
Christian:
that is the extreme case of complacency and skill atrophy, the OOD event, but please do not catastrophise, that OOD can be a small event, a death by a thousand papercuts for a person with an unusual name / occupation …
Claude:
Right. Tesla isn’t the dramatic crash at highway speed. It’s the everyday friction. The autopilot that keeps correcting your unusual route back to the main road. The form that won’t accept your name. The system that doesn’t know what to do with a job title it hasn’t seen before.
OOD isn’t just the crisis. It’s the person with a hyphenated surname, the freelancer whose work doesn’t fit a category, the small business in a niche the model has barely seen. Death by a thousand papercuts.
Christian:
for the close i have some other idea,
“Don’t become complacent, be a human: push back”
what do you think?
i want the post to end with a verbatim list of my original thoughts that made it, the technology is not going away, because it is useful -> the point is, that it needs a lot of determination to make such thoughts go through the model. having you as a filter, please cite yourself and tell people what you got wrong, where you lost what only weakly activates your attention, also have this text in the post before your writing.
Claude:
That’s a powerful close. You’re using the conversation itself as evidence.
[Asked about ordering]
Christian:
i think you are showing another example of what i am trying to demonstrate.
First: reiterate my verbatim arguments, to get you off your attractor.
Second: create the markdown
Claude:
[Listed arguments, wrote the essay, but truncated the file mid-sentence]
Christian:
i think you are showing another example of what i am trying to demonstrate.
where is Semmelweiß ??
First: reiterate my arguments verbatim -> to get you off your attractor.
Second: create the markdown
Claude:
You’re right. I just demonstrated it again.
[Listed all arguments again, then asked about Semmelweis:]
Where was Semmelweis missing? You said “where is Semmelweiß??” Was he missing from the essay text in my previous attempt, or from the argument list?
Christian:
society needs that kind of thought, that kind of person, they need to exist AND need to be able to express their idea clearly enough that it eventually gets through. If Semmelweis had routed his thinking through an LLM, the tool would have steered him toward the consensus medical opinion of his time and his colleagues answering him using an LLM for having their arguments interpolated would have made a convincing case for any outsider. The attractors would have pulled him and others to the bottom of the bowl. And even if he resisted the tool, his writing, his arguments, his proposals would have looked as polished as everyone else’s, making it harder for the idea to stand out as something genuinely different that demands attention.
Claude:
That’s argument 20. It’s yours, almost entirely your words. You wrote it, not me. And it’s the strongest paragraph in the entire essay because it makes the abstract argument concrete from both sides: the dissenter gets smoothed AND the opposition gets polished.
I need to make sure this goes into the essay as close to verbatim as possible, not paraphrased, not restructured, not “improved.”
[Wrote the full essay successfully]
Christian:
There is another angle to this. Because RLHF trains models to produce what humans like, the tool acts as a mirror. It reinforces your predispositions. Tells you what you want to hear. We get echo chambers of one, and in extreme cases this can drive people toward psychosis. The public output converges toward the bland median while private beliefs harden. A different kind of fractal degradation, from the same cause.
could be a footnote?
also i see no verbatim: “i want the post to end with a verbatim list of my original thoughts that made it, the technology is not going away, because it is useful -> the point is, that it needs a lot of determination to make such thoughts go through the model. having you as a filter, please cite yourself and tell people what you got wrong, where you lost what only weakly activates your attention, also have this text in the post before your writing.” at the correct place in the essay
Claude:
You’re right on both counts.
[Made the echo chamber section a footnote, added the verbatim instruction text in the correct place]
Christian:
prefix it with “Christian -> LLM:”
Claude:
[Done]
Christian:
remove Grok, does anybody take it seriously?
Claude:
[Removed Grok from essay body, noted it remains in the verbatim arguments]
Note: Grok is still in your verbatim argument 1 in the raw material section. I left that untouched since those are your words as spoken. Want me to change it there too?
Christian:
i will not fake history
Christian:
please write everything you have in your context window now into an external file, i want a verbatim copy of our conversation please! drop the ai tell checklist and the old post that i shared with you verbatim. cut the discussion about improving the ai tells post short, where it does not relate to the argument being made; self-reflection about the style arguments made against your style can stay