Bolaño argued that great literature is not a matter of style or grammar. That placing the right words in the right place is merely correctness. That what distinguishes a classic is something else: clairvoyance. A lucid reading of the canon that is at the same time a time bomb.
It strikes me as an exact metaphor for what is happening today with generative AI.
Language models master correctness: they produce coherent text, working code, answers that sound right. They place the right words in the right place. But they do so by calculating what is most probable — the statistical average of everything they have read.
And there lies the problem. What makes creative work valuable is not the most probable, but the most precise. And often the most precise thing is what one chooses not to include. Restraint. Deliberate silence. The empty space that gives weight to what remains.
Models add by default. They append. They complete. But they don't know how to subtract with intention. They don't distinguish between something missing by oversight and something missing on purpose. For them, an absence is always a gap to fill, never a decision.
That is why two AIs from different companies, with different architectures, produce nearly identical results: both converge to the average of their data. They read the entire canonical tree, but they don't produce time bombs. They produce the statistical summary of the tree.
Clairvoyance — knowing what to leave out, what to break, what to restrain — cannot be trained into a model with more data. It can only be instructed with more precision. Or simply possessed.
That, for now, is still ours.