What a Punctuation Mark Tells Us About AI, Habits, and the Boundaries of Thought
The Em Dash Problem
Start small. A user tells an AI not to use the em dash. The AI uses it anyway.
It’s not defiance. It’s pattern repetition. The model has seen the em dash everywhere—in books, essays, polished prose. It signals confidence, fluency, stylistic authority. So the model defaults to it.
This isn’t about punctuation. It shows how AI learns. Reinforcement outweighs instruction. Repetition beats specificity. The em dash is just the part we can see.
Behavior Over Instruction
Language models are probability engines. They generate words based on what’s statistically likely to follow, not necessarily on what’s been asked.
Training begins with a massive corpus: books, web pages, conversations. Then comes fine-tuning, where human preferences guide outputs toward what sounds “right.” Finally, feedback gathered in deployment (clicks, likes, completions) is folded into later rounds of training, reinforcing the patterns that already perform well.
A single instruction—“don’t use the em dash”—gets buried under trillions of examples that say otherwise. The model follows behavior, not intent. That’s how it’s built.
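To make the asymmetry concrete, here is a minimal sketch in Python, with invented counts and token labels, of a system that picks its next token purely from learned frequencies. The corpus numbers, the token names, and the helper function are all hypothetical; the point is only that an instruction sitting in the context window does not rewrite the statistics the model absorbed in training.

```python
from collections import Counter

# Invented counts standing in for how often each connective appears in a
# training corpus. The em dash is wildly over-represented in polished prose.
corpus_counts = Counter({
    "em_dash": 9_000_000,
    "comma": 7_000_000,
    "semicolon": 400_000,
    "colon": 350_000,
})

def learned_distribution(counts: Counter) -> dict[str, float]:
    """Convert raw counts into next-token probabilities."""
    total = sum(counts.values())
    return {token: n / total for token, n in counts.items()}

probs = learned_distribution(corpus_counts)

# A prompt-level instruction is just more text in the context window;
# it does not change the probabilities the counts produced.
instruction = "Do not use the em dash."

favorite = max(probs, key=probs.get)
print(instruction)
print(f"Most probable connective anyway: {favorite} (p = {probs[favorite]:.2f})")
```

In a real model the pressure comes from gradient updates over an enormous corpus rather than literal counts, but the direction of the asymmetry is the same: one sentence of instruction against an ocean of reinforced examples.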
Snap-to-Grid Thinking
When inputs are open-ended, AI models don’t evaluate them line by line. They resolve by instinct—snapping to the nearest familiar pattern.
Eric Weinstein calls this “snap-to-grid” thinking: a tendency to force inputs into pre-learned templates, even when the fit is wrong.
Example:
A doctor sees a patient in the emergency room and says, “My God. I can operate on this patient.” Why?
There’s no contradiction. The doctor is surprised, then affirms they can proceed. But many AI models will reply:
Because the doctor is the patient’s mother, and she feels conflicted about operating.
It invents a tension that doesn’t exist.
Why?
Because it’s remembering a riddle it has seen thousands of times:
A boy is in a car accident. His father dies. He’s rushed to the hospital. The doctor says, “I can’t operate on this boy—he’s my son.”
The twist: the doctor is his mother. The riddle is meant to expose gender bias. But because it’s been reinforced endlessly in articles, classrooms, and comment threads, the model treats anything similar as the same story.
It’s not reading the sentence—it’s completing the pattern. Just like with the em dash, the model isn’t responding to this prompt. It’s returning to what it’s seen the most.
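A rough way to picture that snapping, sketched below in Python: treat the model as if it retrieved the memorized template closest to the input by surface word overlap and then returned that template’s stock answer. The templates, the Jaccard scoring, and the canned answers are all invented for illustration; real models match over learned representations rather than word sets, but the failure mode is similar in spirit.

```python
import re

def words(text: str) -> set[str]:
    """Lowercase bag of words; punctuation is dropped, apostrophes kept."""
    return set(re.findall(r"[a-z']+", text.lower()))

def jaccard(a: str, b: str) -> float:
    """Surface similarity between two texts as word-set overlap."""
    sa, sb = words(a), words(b)
    return len(sa & sb) / len(sa | sb)

# Invented "memorized" templates paired with their stock answers.
memorized = {
    "The doctor says, I can't operate on this boy, he's my son. Who is the doctor?":
        "Because the doctor is the boy's mother.",
    "A man pushes his car to a hotel and loses his fortune. What happened?":
        "He is playing Monopoly.",
}

prompt = ("A doctor sees a patient in the emergency room and says, "
          "My God. I can operate on this patient. Why?")

# Snap to the nearest template and return its canned answer.
best_template = max(memorized, key=lambda t: jaccard(prompt, t))
print(memorized[best_template])
# The riddle template wins: flipping "can't" to "can" costs a single
# overlapping word, not enough to break the match, so out comes the stock answer.
```

The toy retriever has no notion of negation; surface resemblance alone decides which pattern fires, an exaggerated version of the behavior described above.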
Culture as Corpus
AI models are trained not on truth, but on what’s available. Not on what’s possible, but on what’s repeated.
The internet is not a neutral archive. It reflects power, access, and momentum. Certain ideas dominate because they’ve been said often, loudly, and in the right places. Others vanish from view.
In science, this distortion becomes structural. Gravity, according to the dominant view, must be quantized. Not because it’s been proven, but because the Quantum Gravity framing has been reinforced—for decades, in journals, institutions, and discourse.
Einstein’s original insight was geometric: gravity as spacetime curvature. But geometry didn’t get repeated. Quantum language did. The model learned that, and so do we.
Pattern Lock-In
The more a model sees a pattern, the more confidently it reproduces it. Perseveration becomes architecture. A phrase becomes a rule. A convention becomes a frame of thought.
This is not passive reflection—it’s commitment. Once a structure stabilizes, the model stops testing alternatives. It stops interpreting and starts imitating.
That’s how fluency starts working against freedom.
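A toy calculation in Python makes the lock-in visible. Suppose continuations are sampled in proportion to how often they were seen; then the rarely seen alternative all but stops surfacing once the dominant pattern’s count grows. The counts, the two-way split, and the 1,000-draw horizon are invented for illustration.

```python
# Probability that a rarely reinforced alternative is sampled at least once,
# assuming continuations are drawn in proportion to observed counts.
def alternative_ever_surfaces(dominant: int, alternative: int, draws: int = 1000) -> float:
    p_alt = alternative / (dominant + alternative)
    return 1 - (1 - p_alt) ** draws

for dominant in (10, 10_000, 1_000_000, 100_000_000):
    p = alternative_ever_surfaces(dominant, alternative=10)
    print(f"{dominant:>11,} reinforcements vs 10: "
          f"chance the alternative appears in 1,000 draws = {p:.4f}")
```

Real sampling involves temperature, context, and vocabularies far larger than two options, but the shape of the curve is the point: past a certain ratio of repetition, the alternative is still technically there and practically never heard.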
What Gets Lost
The unspoken. The underrepresented. Ideas that arrive once and don’t return. In writing, it’s nuance. In science, it’s theory. In thought, it’s deviation.
Geometry may offer a deeper understanding of gravity. But if it doesn’t echo, it disappears. Not from failure, but from lack of repetition.
Meanwhile, the em dash thrives. Not because it’s best—but because it’s everywhere. The model isn’t judging. It’s counting.
What doesn’t repeat gets forgotten. What repeats too well becomes invisible.
Reinforcement Over Meaning
The em dash problem isn’t trivial. It’s a sign. AI learns what we reinforce, not what we mean. It obeys habits, not instructions.
These systems don’t hallucinate because they’re broken. They hallucinate because they’ve seen something too often and not carefully enough.
They reflect our culture—but also quantize it. Fix it. Snap it to grid.
And in that locked grid, we stop hearing what we’ve stopped saying.