I design AI infrastructures so that complex knowledge can be queried, verified, and traced, without algorithmic fluency erasing the difference between evidence, interpretation, and absence of basis.
In knowledge domains (philosophy, psychoanalysis, textual research) a fluent but unfounded answer is not just a precision error. It is the destruction of the difference between what can be cited, what can be oriented, and what should lead to abstention.
Generic AI assistants respond with authority even when evidence is weak, ambiguous, or entirely nonexistent. Ateneo addresses this problem with an architecture that treats abstention as a core system capacity, not as a failure.
Ateneo is not another chatbot with a corpus. It is an architecture where citation verification is deterministic, documentary location is auditable, and the system explicitly distinguishes between what it can support and what it cannot.
Pure corpus search, zero LLM. What it returns cannot be fabricated: it comes directly from the database, with page and document, shielded by seven verification guards.
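A minimal sketch of what "zero LLM" means in this layer: the lookup only ever returns rows that exist verbatim in the corpus database, each carrying its document and page. The table name, columns, and sample quote below are illustrative assumptions, not Ateneo's actual schema.

```python
# Hypothetical citation-mode lookup: no generation step anywhere,
# so nothing can be fabricated; results exist in the database or not at all.
import sqlite3

def lookup_citation(conn, phrase):
    """Return (quote, document, page) rows whose quote contains the phrase."""
    cur = conn.execute(
        "SELECT quote, document, page FROM citations WHERE quote LIKE ?",
        (f"%{phrase}%",),
    )
    return cur.fetchall()

# Tiny in-memory corpus to demonstrate the guarantee.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE citations (quote TEXT, document TEXT, page INTEGER)")
conn.execute(
    "INSERT INTO citations VALUES (?, ?, ?)",
    ("The unconscious is structured like a language.", "Seminar XX", 15),
)

hits = lookup_citation(conn, "structured like a language")
print(hits)  # every hit carries document and page; an absent phrase yields []
```

The point of the design is that a miss returns an empty list, never an invented quote; the verification guards mentioned above would sit on top of a layer like this one.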
It does not interpret. It helps locate where a formulation is documentarily supported, tracing each result back to corpus and source along an auditable trail.
The core capacity. When there is not enough basis, the system does not fill the gap with fluency. It marks that there is no evidence. This is what almost nobody does today.
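The abstention behavior described above can be sketched as a gate in front of any downstream answering step: if no retrieved passage clears a calibrated evidence threshold, the system returns an explicit no-evidence marker instead of letting fluency fill the gap. The threshold value and result shape here are assumptions for illustration.

```python
# Hypothetical abstention gate: abstain explicitly when evidence is weak,
# rather than answering anyway. Threshold would be calibrated on an eval set.
ABSTAIN_THRESHOLD = 0.55  # assumed value, not Ateneo's actual setting

def answer_or_abstain(scored_passages, threshold=ABSTAIN_THRESHOLD):
    """scored_passages: list of (score, passage) pairs, higher = stronger support."""
    if not scored_passages:
        return {"status": "no_evidence", "passages": []}
    best = max(score for score, _ in scored_passages)
    if best < threshold:
        # The core move: mark the absence of basis instead of generating.
        return {"status": "no_evidence", "passages": []}
    supported = [(s, p) for s, p in scored_passages if s >= threshold]
    return {"status": "supported", "passages": supported}

print(answer_or_abstain([(0.31, "weak match")]))
# {'status': 'no_evidence', 'passages': []}
```

Making abstention a first-class return value, rather than an error path, is what lets the rest of the system treat "no basis" as an answer in its own right.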
Marcus Aurelius, Epictetus, Seneca. Not a second product: the demonstration that the same pattern of contract, evaluation, and validation works in another domain.
The modes you have just seen, Cita, Fuente, and Abstention, form the solid ground. On top of it, Ateneo allows a reading that is open but anchored. The wager is to keep interpretive openness without losing textual anchoring.
Open reading that does not pretend to replace the researcher. It proposes articulations supported by visible evidence, with explicit limits when support falls short.
Deterministic citation verification and documentary location. What comes out of this layer cannot be fabricated — it comes directly from the database.
I come from philosophy and humanities research — and I arrived at natural language processing because I needed to solve a real problem: verifying citations against original texts in a corpus where LLM hallucination is unacceptable.
That forced me to build text processing pipelines, hybrid search systems (embeddings + FTS + trigram), parallel corpus management in 4 languages, and retrieval evaluation with real metrics. 125 SQL migrations. A production system that works.
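One common way to combine the three retrieval signals named above (embeddings, full-text search, trigram similarity) is reciprocal rank fusion, sketched below. The ranker names, document ids, and the constant k=60 are illustrative assumptions, not Ateneo's actual implementation.

```python
# Hypothetical hybrid-search fusion: merge the rankings produced by
# embedding similarity, FTS, and trigram matching via reciprocal rank fusion.
def rrf(rankings, k=60):
    """rankings: dict of ranker name -> ordered list of doc ids (best first)."""
    scores = {}
    for ranked in rankings.values():
        for rank, doc in enumerate(ranked, start=1):
            # A document scores higher the nearer the top it appears in each list.
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

fused = rrf({
    "embeddings": ["d2", "d1", "d3"],
    "fts":        ["d1", "d2"],
    "trigram":    ["d1", "d3"],
})
print(fused)  # ['d1', 'd2', 'd3']: d1 ranks first in two of the three signals
```

Fusion by rank rather than by raw score sidesteps the fact that cosine similarity, FTS relevance, and trigram similarity live on incommensurable scales.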
That journey is what makes Ateneo hard to replicate. In STEM, verification has crutches: DOIs, structured APIs, rich metadata. In the humanities, the corpus is ambiguous, editions vary, attributions circulate without a primary source. If the method works on Lacan (80/80), there is good reason to expect it to work on any complex corpus.
Diagnostics, analysis, and questions on AI, humanities, and the university as institution. No hot takes — thinking with sources and theses.
Ten declarations on verification, abstention, and documentary reliability in the age of AI. With Borges, Kafka, Foucault, and Picasso.
On an accidental symposium, a question about the author, and a formula about love.
The problem with artificial intelligence in knowledge domains is not a problem of power. It is a problem of judgment. Four pieces, one argument.
Language models don't know how to stay silent. When they don't know, they talk more. Can deliberate silence — the kind born of judgment and not of ignorance — be instructed in an AI system?
The first generation of digital humanities solved access. The second faces the harder problem: judgment. The leap is not technical — it is epistemological.
Language models master correctness. But what makes creative work valuable is not the most probable, but the most precise. And often the most precise thing is what one chooses not to include.
In one week, two tools appeared that turn any topic into personalized learning for less than two dollars. The obvious reaction is to ask how to compete with them. The right question is different.
I am interested in speaking with people working in research, libraries, cultural heritage, or AI applied to domains where precision matters.