For years, the great question of the digital humanities was material: how do we digitize this? That generation did immense work. They didn't just transfer archives to the digital environment — they built new conditions for access, consultation, and the circulation of knowledge. Thanks to that effort, today we search, compare, and cite volumes of material that once required travel, permissions, and weeks in the archive.
It would be a mistake to treat that achievement as something surpassed. In fact, everything that follows is only possible because that infrastructure exists.
But the question has changed.
The archive is, to a large extent, already digitized. And on top of that, systems capable of speaking fluently about it have appeared. That is why the decisive question is no longer how do we digitize, but what does it mean that this is already digitized. The leap is not technical. It is epistemological.
The contemporary problem is not just that models can be wrong. It is that they can be wrong with extraordinary eloquence. They can sound convincing precisely where they fail to distinguish between what a text sustains, what it suggests, and what it simply invents. And they do so because they prioritize fluency, the statistically probable, over truth, which is almost always disruptive. Syntactic fluency does not guarantee knowledge: sometimes it only guarantees an illusion of understanding, a semblance of knowledge where there is none.
The first generation solved access. The second faces the harder problem: judgment.
And "judgment" does not mean skepticism or moralizing. It means something more concrete: the capacity to distinguish between a verifiable citation and a paraphrase; between documentary location and inference; between what can be presented as evidence and what can only be offered as orientation. At its core, the problem is whether we are still capable of standing behind what we claim when the production of language has become trivial.
This also changes the role of tools. An intellectually serious tool is no longer defined solely by how much it retrieves or produces, but by how it organizes the relationship between output, source, and limit. Its value lies not in speaking a lot, but in knowing at which layer it is speaking. And in staying silent when the support falls short.
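To make that distinction concrete, here is a minimal sketch in Python, with hypothetical names and no claim to describe any existing system's interface: an output type that carries its evidentiary layer with it and abstains when a claimed citation cannot be located.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Layer(Enum):
    """Evidentiary layer at which a statement is offered."""
    CITATION = auto()     # verbatim text, locatable in a source
    PARAPHRASE = auto()   # restatement that a source still sustains
    INFERENCE = auto()    # orientation only; not presentable as evidence


@dataclass
class Statement:
    text: str
    layer: Layer
    source: Optional[str] = None  # e.g. an archive identifier or page reference


def present(statement: Statement) -> str:
    """Render a statement only as strongly as its support allows.

    A claimed citation without a locatable source is not silently
    downgraded to paraphrase or inference: the function abstains
    rather than dressing invention up as evidence.
    """
    if statement.layer is Layer.CITATION and not statement.source:
        return "[abstained: citation claimed, but no source could be located]"
    label = {
        Layer.CITATION: f"cited from {statement.source}",
        Layer.PARAPHRASE: f"paraphrase of {statement.source or 'an unlocated source'}",
        Layer.INFERENCE: "inference, offered as orientation only",
    }[statement.layer]
    return f"{statement.text} ({label})"


if __name__ == "__main__":
    # A citation with no located source abstains instead of asserting.
    print(present(Statement("The archive is already digitized.", Layer.CITATION)))
    # An inference is allowed to speak, but labeled as orientation.
    print(present(Statement("The letter suggests a later date.", Layer.INFERENCE)))
```

The constraint the sketch encodes is the point: nothing is promoted to the layer of evidence without a locatable source, and inference is labeled as orientation rather than presented as citation.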
Perhaps that is the real opportunity of the moment. Not to use AI to abolish reading, but to force ourselves to think more carefully about what it means to read. Not to use it to replace judgment, but to make visible how much our work depended on it. Piglia said that a reader is also one who misreads, distorts, perceives confusedly — that in the clinic of the art of reading, the one with the best eyesight does not always read best. What a good reader brings is not correctness, but friction. Exactly what a statistical model cannot produce by design.
Because when the archive is already digitized and the machine can already speak about it, the decisive problem is no longer how to access it.
It becomes how to stand behind what one claims.